Generative AI in research funding
Three principles for the use of generative AI in the writing phase of the grant proposal
These three principles are based on the Amsterdam UMC Research Code (Basic Principles of Research Code Amsterdam UMC).
1. Principle #1: Factual Quality and Accountability
The applicant is ultimately accountable for the text of the proposal. Although generative AI can be a supportive tool for researchers, it carries risks of inaccuracy, misinformation, and plagiarism. Therefore, applicants must ensure that the content and citations generated by the AI tool are accurate, valid, and appropriate, rectifying any errors or inconsistencies discovered.
2. Principle #2: Responsible Use Concerning Data Protection and Intellectual Property
A grant proposal is always considered confidential; data leaks could impair future patent rights. Everything that is shared with many generative AI tools (such as ChatGPT) is stored and can be used for training purposes, especially if a free version is being used. Therefore, never share personal data from research subjects or other sensitive or confidential information about your research.
3. Principle #3: Transparency
Check the funding body's guidelines on the use of generative AI in grant proposals. Applicants need to document the use of generative AI and other LLM-based tools in preparing the proposal and acknowledge this in the grant application. For Horizon Europe proposals, for instance, applicants are generally requested to be transparent and to report the use of generative AI (which tools were used and how).
For internal grants, such as the Amsterdam UMC PostDoc Career Bridging Grant, applicants will be asked to acknowledge the use of generative AI in the application form.
Funding bodies have published guidance notes on the use of GenAI:
- Both NWO (NWO komt met voorlopige richtlijnen gebruik AI | NWO) and ZonMw (Niet toestaan gebruik van generatieve AI in ZonMw beoordelingsprocessen | ZonMw) forbid the use of GenAI in the assessment of research grant applications.
- EU Horizon Europe has published guidelines on the use of GenAI covering both the review process and grant writing: Living guidelines on the responsible use of generative AI in research | Research and innovation (europa.eu).
- NIH policy also prohibits the use of GenAI to analyze and formulate peer review critiques for grant applications, focusing on confidentiality during the review process: The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process.
- KU Leuven - a clear website with guidelines and examples of responsible use: Use of GenAI (including LLMs) in the different phases of research - Research (kuleuven.be)
- Cornell University - a report by the Cornell University task force that also discusses GenAI at different stages of research, with examples of its use: 'Generative AI in Academic Research'