Interdisciplinary experts from Amsterdam UMC and the University of Amsterdam, two institutions within the Amsterdam AI ecosystem, have today published their 'living guidelines' for the responsible use of generative AI in Nature.

Lead author Claudi Bockting, professor of Clinical Psychology in Psychiatry at Amsterdam UMC and co-director of the Centre for Urban Mental Health, believes that "AI tools could flood the internet with misinformation and ‘deep fakes’ that can be indistinguishable from real individuals. Over time, this could erode trust between people and in politicians, institutions, and science. Independent scientists must take the lead in testing, proving, and improving the safety and security of generative AI. However, most scientists don’t have access to the facilities or public funding to develop or evaluate generative AI tools."

The guidelines were crafted after two international summits with members of international organizations such as the International Science Council, the University-Based Institutes for Advanced Study, and the European Academy of Sciences and Arts, as well as members of global institutions such as UNESCO and the United Nations. The initiative responds to a pressing need to ensure scientific and societal oversight of a swiftly evolving sector.

In the authors' view, oversight should be modeled on a scientific institute: it should focus on quantitative measurement of real-world impacts, both positive and potentially harmful, and apply the scientific method in its evaluations. By maintaining a distance from dominant commercial interests, the consortium prioritizes public welfare and the authenticity of scientific research.

This initiative is a proactive response to gaps in current governance, offering a balanced perspective amid the slow pace of governmental regulation, the fragmented development of guidelines, and the unpredictability of self-regulation by major tech companies.

Living Guidelines

The 'living guidelines' revolve around three key principles:

Accountability: Advocating a human-augmented approach, the consortium believes that while generative AI can assist with low-risk tasks, essential endeavors such as preparing scientific manuscripts or conducting peer review should retain human oversight.

Transparency: Clear disclosure of generative AI use is imperative. This allows the broader scientific community to assess the implications of generative AI for research quality and decision-making. Furthermore, the consortium urges AI tool developers to be transparent about their methodologies, enabling comprehensive evaluations.

Independent Oversight: Given the vast financial stakes in the generative AI sector, relying solely on self-regulation is not feasible. External, independent, and objective audits are crucial to ensuring the ethical and high-quality use of AI tools.

The proposed scientific body must have sufficient computing power to run full-scale models and enough information about the sources used for training to judge how AI tools were trained, even before they are released. Effective guidelines will require international funding and broad legal endorsement, and they will work only in collaboration with tech-industry leaders, while the body's independence is safeguarded. The authors underscore the urgent need for this proposed scientific body, which could also address emergent or unresolved issues in the domain.

In essence, the consortium emphasizes the need for focused investment in an expert committee and oversight body. This would ensure that generative AI progresses responsibly, striking a balance between innovation and societal well-being.

AI and Amsterdam UMC

Responsible and human-oriented use of AI is one of the principles of Amsterdam UMC's policy, including in its collaboration with regional partners in Amsterdam AI. Discussion of the ethical and legal aspects of AI is an important part of both the Health Research Infrastructure and Amsterdam AI. These principles therefore also apply to generative AI. Mat Daemen, vice dean of research at Amsterdam UMC: “The Nature article advocates drawing up living guidelines for the use of generative AI based on three principles: accountability, providing access to all information, and independent supervision. That seems like a very sensible route to me.”

About the Consortium
The consortium comprises AI experts, computer scientists, and specialists in the psychological and social impacts of AI from Amsterdam UMC, the IAS, and the Faculty of Science of the UvA, all part of the Amsterdam AI ecosystem, together with Indiana University (USA). This joint effort, supported by members of global and scientific institutions, seeks to steer generative AI toward a future that is both innovative and ethically conscious.

Read the full article in Nature.