Generative AI in research funding

Three principles for the use of generative AI in the writing phase of the grant proposal

These three principles are based on the Amsterdam UMC Research Code (Basic Principles of the Research Code Amsterdam UMC).

1. Principle #1: Factual Quality and Accountability

The applicant is ultimately accountable for the text of the proposal. Although generative AI can be a supportive tool for researchers, it has drawbacks with regard to inaccuracies, misinformation, and plagiarism. Therefore, applicants must ensure that the content and citations generated by the AI tool are accurate, valid, and appropriate, and must rectify any errors or inconsistencies they discover.

2. Principle #2: Responsible use concerning data protection and intellectual property

A grant proposal is always considered confidential; data leaks could impair future patent rights. Everything that is shared with many generative AI tools (such as ChatGPT) is stored and can be used for training purposes, especially if a free version is being used. Therefore, never share personal data from research subjects or other sensitive or confidential information regarding your research.

3. Principle #3: Transparency

Check the funding body's guidelines on the use of generative AI in grant proposals. Applicants need to document the use of generative AI tools, including LLMs, in the preparation of the proposal and disclose this in the grant application. Horizon Europe proposals, for instance, generally request applicants to be transparent and to report the use of generative AI (which tools were used and how they were utilized).

For internal grants, such as the Amsterdam UMC PostDoc Career Bridging Grant, applicants will be asked to acknowledge the use of generative AI in the application form.