Changelog: Tracking changes from version 1.0 to 1.1 of the GenAI Policy

This page displays the changes made in the GenAI policy from version 1.0 to 1.1. Removed text is indicated with strike-through. Newly added text is marked in red. The currently valid policy is version 1.1. The older version 1.0 is archived.

Version control

Date Version Comment
December 8, 2025 Version 1.0 Initial release
December 8, 2025 Version 1.0 Corrected one typo ("artifical") & fixed link to the footnote
January 26, 2026 Version 1.1 Adjustments based on feedback from grantees

Generative Artificial Intelligence (GenAI) tools are rapidly emerging across the ecosystem. While we recognize the interest – and at times share the curiosity – surrounding these technologies, it is essential that they be used responsibly, transparently, and in full accordance with our established principles.

This policy describes how GenAI (including tools such as large language models and code assistants) may be used in applications submitted to NLnet and in all work delivered within NLnet-funded projects, covering both the preparation of proposals and the execution of funded activities. The policy concerns GenAI specifically and does not regulate other forms of automation.

Foundation of the policy

This policy is grounded in longstanding principles that apply to all NLnet-funded work. From these fundamental principles we have derived what we consider common-sense consequences with regard to the use of GenAI.

Fundamental principles

  1. FLOS licence: All projects must be free/libre/open source: all scientific outcomes must be published as open access, and any software and hardware developed must be published under a recognised free and open source licence in its entirety.
  2. No misrepresentation: Grantees and applicants should not claim work as their own if it is not. This has always been true, and GenAI doesn’t change that.
  3. Project quality: Grantees are expected to deliver project outcomes to the best of their ability. Tools may assist but do not replace human responsibility for correctness, clarity, and reproducibility.

Use of GenAI in the application process

We encourage applicants to trust their own skills and write their own proposals. That said, applicants may use GenAI tools in preparing proposals, but any such use must be disclosed. This includes drafting, translation, or summarisation. It applies both to written proposals and to materials provided during interactive evaluation. Disclosure allows evaluators to understand how the proposal was produced and ensures fairness.

How to disclose: see the section on Transparency & logging

How to disclose?

If GenAI is used in the application process, a prompt provenance log must be maintained. This log should list:

  • the model used,
  • dates and times of prompts,
  • the prompts themselves,
  • the unedited output.

Instructions about how to submit the prompt log for applications are provided on the proposal form. Minor uses (e.g., spellchecking) do not require logging.
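As an illustration, a single log entry covering the four points above might look as follows (the model name, timestamp, and prompt here are hypothetical, not a prescribed format):

```text
Model: ExampleLLM v2.1
Date: 2026-01-15, 14:02 UTC
Prompt: "Summarise the attached project plan in 200 words for the abstract."
Unedited output: "This project develops ..." (full output attached)
```

Any format with equivalent information is acceptable, as long as each entry can be matched to the part of the application it influenced.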

Use of GenAI in project development

  • Grantees must ensure that all submitted work can be legally published under a FLOS licence. This includes verifying that GenAI-assisted outputs do not reproduce copyrighted or incompatible material.
    • Example: when using a code assistant, check the assistant’s terms of use, and ensure that outputs are not reconstructed from copyrighted sources.
    • Example: Under EU law, purely AI-generated outputs without substantial human intellectual contribution are not eligible for copyright protection [Note 1]. In any case, outcomes purely generated by AI may not be submitted as work eligible for payment under the grant.

  • Grantees must not present AI-generated content as if it were their own human-authored work.
    • Explanation: When we provide a grant to a person to develop a project, we expect that person to do the work. They should not outsource the work to another person while pretending they did it themselves. Similarly, grantees should not deliver GenAI outcomes and pretend it was their own human effort. Human contributors remain accountable for accuracy, originality, and integration of GenAI-supported work.

  • Use of GenAI must not reduce the quality, clarity, reliability, or reproducibility of the work.
    • Explanation: Tools may assist, but human responsibility for quality remains. Human contributors are expected to understand and be able to explain design and code decisions.

  • Working on the topic of GenAI itself is allowed within the scope of a grant, but only if it is explicitly part of the approved work.
    • Explanation: We are not against GenAI, so applicants should not hesitate to submit proposals that involve work on GenAI. However, work on GenAI only falls within scope if it is part of the approved project plan.

Transparency & logging for project development

Use of GenAI should be disclosed and transparent. For any substantive use of GenAI that materially affects outputs, public disclosure is required, making the relevant information available to both users and contributors:

  • The project’s general stance toward the use of GenAI should be disclosed publicly by providing a broad description.
    • Example: A codebase declares, typically in its ‘readme’, broadly how GenAI is used, e.g. for logic/boilerplate/tests/documentation/…. This can help set expectations for both users and contributors.
    • Example: A project publishes its own policy for contributors, outlining its dos and don’ts with regard to the use of GenAI.
  • Generated content should be marked as such. When adding (partially) generated code, make sure the provenance is clear for each such contribution. Specify which model was used (including version) and how it was used. Provide the prompts/interactions used and the resulting output, or a summary thereof.
    • Example: When using git, distinguish commits that add generated code and include the used model and prompts in the commit message. Consider choosing a development tool that helps you create commits, and auto-fills the relevant information. See [Note 2] for example commits.
    • Make sure to provide the information in a logical place where it can easily be found. Avoid hosting it on third-party platforms that require a log-in or may disappear over time.
  • If GenAI is not used for generating code but only for tasks like testing or creating documentation, it suffices to provide a general description of the use in the ‘readme’. More detailed logging on a per commit basis is preferred but not required.

Alternative methods for logging

The goal of disclosure is to inform NLnet, users, and contributors about the extent to which GenAI was used to generate project results. If you prefer to use different methods for logging with equivalent results, this can be acceptable too. Use common sense to determine such equivalence and make sure you are able to answer questions about the use of GenAI from our team.

Based on the use case, this log should be made available in the following ways:

  • In the case of project outcomes, logs should be made available publicly in a location that is easily discoverable for users.
  • In the case of a grant application being submitted through the proposal form, instructions about how to disclose are provided on the form.
  • In the case of a grant application which is still under confidential evaluation, the prompt provenance log should be provided as a separate attachment to the application or emails.

Minor uses (e.g., spellchecking) do not require logging.

Exceptions for grantees with active projects during the publication of the policy

We ask everyone to make an effort to uphold our request for transparency and logging. However, we also understand that for grantees with ongoing projects, this is a new requirement introduced while work was already underway. Grantees of ongoing projects who feel that none of the disclosure options offered above will work for them can propose a personalised plan for transparency to their contact person at NLnet.
An ongoing project in this context is a project for which the Memorandum of Understanding was signed before December 8, 2025, the date when this policy came into force. Note that this is project-based and does not apply to returning grantees with projects starting after this date.

Note that for ongoing projects, logging is not required retroactively. It applies to milestones which were started after this policy came into force (December 8, 2025).

Non-Compliance

Failure to comply with the above policy may result in rejection of the proposal or, ultimately, in the termination of a running grant.

Scope of this policy

This policy explicitly deals with GenAI only (such as Large Language Models). NLnet is a strong proponent of automation and of deterministic and reproducible generation of source code, formal and symbolic proofs, etc., based on specifications and scientific and engineering rigour. Similarly, it does not in any way seek to prevent the use of other forms of machine learning, fuzz testing, or other beneficial techniques. When in doubt, please contact us.

Note 1

See Generative AI and Copyright, page 93, a report requested by the European Parliament's Committee on Legal Affairs. The full quote reads:
"Given this framework, it follows that purely AI-generated outputs—those created automatically by an AI system without substantial human intervention—are not eligible for copyright protection in the EU. Such outputs are considered to fall into the public domain, making them freely available for anyone to use, reproduce, or adapt without seeking permission or providing attribution. The legal and commercial implications of this are significant. For creators and companies investing in AI systems that generate music, art, or text, there is no proprietary right over the final output unless a human has contributed in a way that meets the “intellectual creation” standard."

Note 2

Example commits:

  • Author: Harry Hacker <hh@example.org>
    Date: Sun Jan 18 10:32:15 2026
    Fix compliance tests
    Fix several mistakes in generated code, make it compile; manually verify each test with RFC123 specification.

  • Author: Harry Hacker with CodeLLM-3.4 <hh@example.org>
    Date: Sun Jan 18 10:52:08 2026
    Generate compliance tests
    Prompt: Generate tests for compliance with RFC123 messages.
    Output: (this commit)