Policy on the use of generative AI

Version 1.0

Valid as of: 2025/08

This document outlines the policy of stichting NLnet with regard to the use of generative artificial intelligence (such as Large Language Models), both within grant proposals and within the execution of the actual development effort within grants themselves.

NLnet provides funding to human researchers and developers who work in the public interest. NLnet takes great pride in the craftsmanship of the communities it supports, and in their collaborative intellectual achievements. We recognise and appreciate the personal sacrifices made in pursuit of delivering a more open information society.

Along with IEEE and its Code of Ethics, we believe in a world in which developers, engineers and scientists are respected for their skills and their exemplary ethical behaviour. Generative artificial intelligence, at least in its current state, has no sense of ethics, truth or responsibility — nor does it need to eat or feed households.

NLnet believes in trustworthiness, and — when it comes to software, firmware and gateware — in deterministic solutions that are robust, resilient and human-centric. While many individuals and organisations these days employ generative artificial intelligence in an attempt to reduce human effort and increase overall output volume at their end, they rarely consider the effect of low-quality interaction and information on others — and on society as a whole.

In the absence of actual non-human intelligence and trustworthy reasoning, we believe this is not a path our foundation should be pursuing or encouraging for now. Human life is too short for intelligent people to deal with code hallucinations and an endless stream of machine-generated verbosity with hidden defects. As biased gatekeepers, generative AI models introduce a fundamental uncertainty into learning that information retrieval technology does not have. If anything, we seek to mitigate any negative effects. Generative AI has its use cases, but can be actively harmful in many of the contexts where it is currently pushed, and we have no interest in further promoting this technology. Our chosen approach continues to be to fund and nurture human talent and to build human capacity.

With this in mind, we set the following policy rules:

  • it is allowed to work on the topic of generative AI itself within the scope of a grant, if and only if this is explicitly part of approved work.

  • it is not allowed to use generative AI during the writing of proposals without explicitly mentioning this, nor when providing answers during the interactive evaluation of proposals.

  • unless explicitly agreed otherwise in writing, it is not allowed to use generative AI for writing software source code, making hardware designs, writing documentation or executing any other task designated as human effort in the plan.

  • if an applicant or grantee becomes aware that others are making significant contributions produced by generative AI to the codebase(s) they are working on, and this overlaps with or impacts planned effort within an active grant, they shall inform NLnet immediately (as soon as possible, and before their next payment request to NLnet) to avoid unjustified or problematic claims. We consider the output of generative AI as funded already, and claiming budget for (part of) the work done by an LLM as equivalent to double funding.

  • efforts that allow a significant amount of generative AI output in their code base (even outside of the scope of grants) no longer qualify to benefit from any auxiliary services involving human experts as provided to our grantees by us and our partners — such as security audits, user interface design and legal support.

Note that the above explicitly deals with generative AI only. We are strong proponents of automation and of the deterministic and reproducible generation of source code, formal and symbolic proofs, etc. based on specifications and scientific and engineering rigour. Similarly, this policy does not in any way seek to prevent the use of other forms of machine learning, fuzz testing or other beneficial use cases. However, when in doubt, please contact us.

Transparency

Use of generative AI in engineering should be transparent. Wherever generative AI is allowed and used, a prompt provenance log must be provided alongside any outputs. Such a provenance log shall list the engine(s) used, the dated prompts and the complete, unedited answers from the LLM. This log should be made available publicly in a location that is easily discoverable by users. In the case of a grant application that is still under confidential evaluation, the prompt provenance log should be provided as a separate attachment to the application or to the accompanying emails.
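
As an illustration only, a single entry in such a log could look like the sketch below. This policy does not prescribe an exact format; the engine name, date and texts shown here are hypothetical placeholders.

    Engine: ExampleLLM 4.1 (hosted service at api.example.com)
    Date:   2025-08-14
    Prompt: "Draft a README section explaining the build steps of project X."
    Answer (complete and unedited):
    "To build project X, first install ..."

Any format that records these elements — engine, date, prompt and full answer — for each interaction, in chronological order, would appear to capture the information requested above.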

Non-compliance and consequences

Failure to comply with the above policy rules will in the worst case be seen as fraud, and may result in the immediate termination of any running grants with the applicant in question. If relevant, falsely claimed budget will need to be returned. There might also be public exposure, as users deserve to be informed about potentially unreliable code. A violation might result in up to ten years of exclusion from any of our grant programmes. NLnet will inform an applicant or grantee if a violation has been observed, and allow them one month to provide evidence to the contrary. After this, it is up to NLnet to decide. There will be no correspondence on the matter.