NLnet policy on use of AI

Stichting NLnet has published its policy on the use of generative AI within grant applications and within projects supported by the foundation. As a grantmaker, the foundation sees a responsibility to address quality degradation caused by the use of generative AI, as well as AI plagiarism, within its portfolio.

A key part of the policy is an obligation for applicants and grantees to provide upfront transparency about where and how generative AI is being used, and to apply strong hygiene when doing so by means of a prompt provenance log. This gives the foundation the opportunity to properly review the merits of any work proposed and delivered, and to justify the allocation of grants to NLnet’s donors. The foundation also finds it critical that other developers and end users are aware when code was introduced with minimal or no human supervision. Moreover, from a legal point of view, AI-generated code is not protected by copyright, unless it directly plagiarises existing copyrighted works. Humans should perform due diligence to determine whether such AI plagiarism has taken place.

“Within our grant programmes we work with some of the very best software engineers and researchers around, and we take great pride in that”, says Michiel Leenaars, director of Strategy at NLnet foundation. “One of the ailments of the use of generative AI in technology development is that it plays into the urge to develop technology faster and with fewer human resources than would be responsible. We all know that the combination of haste and skill issues is a fast track to bugs and systemic flaws, and we just cannot afford that. We therefore choose to focus on quality, trustworthiness, robustness and longevity, which we believe to be the long-term needs of any open information society.”

In cases where generative AI is in fact suitable and being used, the foundation asks developers to clearly mark and isolate such code, and to provide a full public log (“prompt provenance log” or “prompt log”) of the interaction with generative AI alongside the regular version-controlled source code. Tools do not have an ego and do not expect privacy or confidentiality, and this extends to developers depending on such tools. If we want to avoid recursive bitrot, we need to be transparent about provenance and about where value is added.
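As a purely illustrative sketch of what “marking and isolating” such code might look like, the delimiter comments, the log filename, and the function itself below are hypothetical conventions, not something prescribed by the policy:

```python
# --- BEGIN AI-GENERATED (see prompts/2025-09-12-header-parser.log) ---
# Hypothetical convention: generated code is kept in a clearly
# delimited block, with a pointer to the prompt provenance log
# committed alongside the source in version control.
def parse_header(line: str) -> tuple[str, str]:
    """Split a 'Name: value' header line into (name, value)."""
    name, _, value = line.partition(":")
    return name.strip(), value.strip()
# --- END AI-GENERATED ---
```

Keeping the generated region between explicit markers lets reviewers and downstream users see at a glance which parts were machine-produced and trace them back to the logged prompts.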

To make the most efficient use of the limited available resources, the foundation will also restrict access to auxiliary support services from human experts to codebases that are untainted by generative AI. This includes situations where others are making significant contributions produced by generative AI to the codebase(s) grantees are working on.

The foundation stresses that its new policy explicitly deals with generative AI only, and that it continues to be a strong proponent of automation and of deterministic and reproducible generation of e.g. source code and formal and symbolic proofs, based on specifications and scientific and engineering rigor. Similarly, it does not in any way seek to prevent the use of other forms of machine learning, fuzz testing or other beneficial use cases.