Generative AI in research funding
Five principles for the use of generative AI in the writing phase of a grant proposal
These five principles are based on the Amsterdam UMC Research Code (Basic Principles of Research Code Amsterdam UMC), the Amsterdam UMC Health Data Science core team rules, and the guidelines of relevant funding bodies. By following them, applicants can responsibly integrate generative AI into their grant applications while meeting ethical standards and complying with relevant guidelines.
1. Accountability and Factual Quality:
Applicants are ultimately responsible for the content of their applications. While GAI can be a supportive tool, it carries risks such as plagiarism, incorrect citations, and inaccuracies. Applicants must check AI-generated content for accuracy, bias, integrity, and completeness, and must ensure that correct citations are added. Generative AI models are not considered authors or co-authors.
2. Data Protection and Confidentiality:
Grant proposals are confidential, and sharing sensitive information with GAI tools can lead to data leaks, potentially affecting future patent rights. Input data, including text and prompts, may be stored and reused by AI applications, risking exposure to other users. Applicants should avoid entering personal data or confidential information into GAI applications.
3. Transparency:
Applicants must be transparent about the use of GAI in their proposals. This includes documenting the use of AI and other language models, validating sources, and mentioning AI use in the references of the application. Funding bodies, such as Horizon Europe, may require explicit acknowledgment of AI use. For internal grants, such as the Amsterdam UMC Postdoc Career Bridging Grant, applicants will be asked to acknowledge the use of generative AI in the application form.
4. Legal and Ethical Compliance:
Applicants must respect (inter)national legislation, including the General Data Protection Regulation, and adhere to the Netherlands Code of Conduct for Research Integrity. They should also follow any specific guidelines from their research institutions.
5. Sustainability:
The application of generative AI is still in its infancy. It is important to experiment with possible applications while taking the associated risks into account. Another aspect to consider is the energy required to work with AI: the significant computing capacity involved gives AI use a substantial environmental footprint. So make sure you have a clear goal in mind when you experiment, and be conscientious.
Update: the situation in 2025
Funding bodies have published guidance notes on the use of GenAI:
- NWO has finalized its guidelines for the use of generative AI by applicants, evaluators, and NWO employees: NWO-beleid op het gebruik van generatieve artificial intelligence (GAI) | NWO.
- Neither NWO nor ZonMw allows the use of generative AI tools in the assessment of research grant applications (Niet toestaan gebruik van generatieve AI in ZonMw beoordelingsprocessen | ZonMw).
- EU Horizon Europe has published guidelines on the use of GenAI both in the review process and during grant writing: Living guidelines on the responsible use of generative AI in research | Research and innovation (europa.eu).
- NIH policy also prohibits the use of GenAI to analyze and formulate peer review critiques for grant applications, with a focus on confidentiality during the review process: NOT-OD-23-149: The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process.
During the preparation of this memo, no GenAI guidelines were found for charity foundations such as the Hartstichting and KWF, but it is assumed that principles similar to those of the national funding bodies also apply to these funds.
Universities have published guidance notes:
- The UvA focuses primarily on guidelines for education: AI tools and your studies - student.uva.nl
- The VU documents a number of European universities' guidelines on GenAI in research: Research and AI - More about - Vrije Universiteit Amsterdam (vu.nl)
- Amsterdam UMC: How can you use ChatGPT and AI in a good way? (amsterdamumc.org)
- KU Leuven offers a clear website with guidelines and examples of responsible use: Use of GenAI (including LLMs) in the different phases of research - Research (kuleuven.be)
- Cornell University: a report by the Cornell University task force also discusses GenAI at different stages of research, with examples of its use: 'Generative AI in Academic Research'
The EU AI Act
Amsterdam UMC researchers developing or using AI-based tools in their (healthcare) research projects should be aware of new regulations and preferably comply with them already in the early stages of their project (e.g., the grant proposal) to prevent issues and delays later on.
The AI Act (Regulation (EU) 2024/1689) (link here) is the first comprehensive legal framework on artificial intelligence worldwide. It aims to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety, and ethical principles while addressing the risks associated with powerful AI models. To oversee enforcement and implementation of the AI Act together with the member states, the Commission has established the European AI Office.
While the AI Act entered into force on 1 August 2024, its provisions are being implemented gradually over the following months and years, with different deadlines for various aspects of the regulation. A first important deadline is 2 February 2025, after which 'Prohibited AI' (see below) can no longer be used.
A second important deadline is 2 August 2026, when oversight of high-risk AI begins and significant fines can be imposed if an institution does not comply with the rules.
The EU is drafting a General-Purpose AI Code of Practice, with input from over 1,000 experts through an iterative process. This document is designed to guide providers of general-purpose AI models, particularly those posing systemic (high) risks. A first draft is now accessible; the final version of the Code is expected to be completed by 2 May 2025.
The AI Act introduces a risk-based approach to the regulation of AI systems with regard to their development, marketing, and use. It categorizes AI systems into four risk levels (unacceptable, high, limited, and minimal risk) and establishes different rules for each level; 'Prohibited AI' refers to the unacceptable-risk category. In addition, it requires appropriate AI literacy of people using AI tools or otherwise involved.