Send in your ideas. Deadline: February 1, 2026

Write-up on the GenAI Policy Feedback Session

Context

On December 8, 2025, we published version 1.0 of our 'Policy on the use of Generative Artificial Intelligence for NLnet-funded projects'. We notified a subset of our grantees: those working on a currently active project. We invited them to provide feedback by e-mail, by attending an online feedback session on December 17, or both. A little over 5 percent of those notified provided feedback via one or both of those channels.

Version 1.0 of the GenAI policy is now archived.
Version 1.1 was created after the feedback session and is the currently valid version.
A changelog tracking the changes from 1.0 to 1.1 is also published.

Feedback

Feedback received by e-mail

Positive feedback

Of the written feedback, about half was positive or neutral with responses like:

  • I like your policy. I am also making a policy. Mine is more strict.
  • There is a typo/broken link in the policy.
  • Your policy is not strong enough.
  • Thanks for the clear guidelines.

Negative feedback

The other half of the feedback was 'negative' in that it voiced critique. *All* of the negative feedback was about a single issue: the requirement that if GenAI is used, a prompt provenance log must be maintained and made publicly available.

The reasons given for being against such a log included:

  • Too bureaucratic
  • Goes against NLnet’s policy to shield grantees from paperwork
  • Impossible to track this level of detail
  • Hampers productivity gains
  • Invasion of privacy: it’s like live streaming my coding session
  • Unclear how logging should be done and published

For context, the wording about logging in NLnet's GenAI policy is:
"Use of genAI should be disclosed and transparent. For any **substantive** use of GenAI that materially affects outputs, a prompt provenance log must be maintained. This log should list:

  • the model used,
  • dates of prompts,
  • the prompts themselves,
  • the unedited output."
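
The quoted wording lists what the log should contain, but not how it should be formatted or published. Purely as an illustration, and not as part of the policy, a single entry in a plain-text prompt provenance log could look something like the sketch below; all names and values are invented.

  Model:   example-model v1.2
  Date:    2025-12-10
  Prompt:  "Write a parser for the configuration format described in docs/config.md"
  Output:  kept unedited in logs/2025-12-10-config-parser-output.txt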

Other feedback

Question: What is meant by 'substantive use' of GenAI?

One person sent in a question: when should one be logging? Only for prompts, or also for AI completions?

Possible alternatives for prompt logging

Two people submitted possible alternatives to prompt logging as feedback:

  1. Claude Plan Mode provides a read-only research and planning phase followed by a suggested implementation strategy. The plan provides insight into what GenAI has contributed to the code.
  2. Use Git as a logging mechanism to distinguish human-written code from AI-assisted code. Prompts could, for instance, be recorded in the Git commit messages. This would also immediately solve the problem of where to publish the prompt log publicly.

Interactive feedback session

The feedback session was held on December 17, 2025. It lasted an hour. This is a chronological and not a topical write-up of the session.

Framing the main question

Given that the logging requirement was considered the biggest problem, we placed the following question at the center:
What would be a good alternative to a prompt log?
We detailed the question further, adding: we don't want to look at this from the perspective of NLnet only, but consider it more broadly. What level of disclosure about GenAI use contributes to a healthy Free/Libre/Open Source (FLOS) ecosystem? One that on the one hand provides transparency about the use, while on the other does not unnecessarily burden developers.
What would you want, or need, to know about the use of GenAI in code? And
What would be a good method to disclose the use of GenAI and make transparent how it was used?

We need better tools for logging

One participant had no principled objection to public logging, but noted that it should be automated: simply connect the coding session to some kind of API that does the archiving. It should be an easy process that unburdens the coder while also providing a public archive.

Using Git as a logging mechanism

There were several positive responses to the suggestion to use Git for logging.

  • Commits could be marked as "Author" or "Author + AI".
  • A participant noted that genAI should not be given equal authorship credit, since it is a tool. Another participant responded: it need not imply equal credit, just assistance.
  • Automation: it would help if this type of logging could be easily automated. With tooling to switch between crediting work as "Author" and "Author + AI", you would have fine-grained control to credit each unit of the work to the appropriate degree.
  • This suggestion is really promising, because many things can be automated in Git and it is integrated into many developer tools (a sketch of what such a commit could look like follows below).
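
Purely as an illustration of the idea above, and not as a requirement or an established convention, a commit that credits AI assistance and records the prompt could look something like the sketch below. All names and values are invented, and the "Assisted-by" and "Prompt" trailers are just one possible shape for such metadata.

  commit 3f2a9c1
  Author: Jane Developer <jane@example.org>
  Date:   Wed Dec 17 14:02:11 2025 +0100

      Add parser for the project configuration format

      Assisted-by: GenAI (example-model v1.2)
      Prompt: "Write a parser for the configuration format described in
        docs/config.md, including error handling for malformed input"

Because trailers like these are ordinary lines at the end of a commit message, existing tooling such as git log --grep and git interpret-trailers can find and extract them, which is what makes this approach relatively easy to automate.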

What are we trying to solve?

A participant asked: 'What is the problem that the GenAI policy is a solution to?' NLnet's answer: multiple problems:

  1. People submitting work that is entirely vibe coded.
  2. Reviewing work has become harder. Bad code used to look bad, but AI-generated code can appear convincing while actually being bad.
  3. Transparency: for instance, whose idea was it to use the MD5 algorithm for this cryptography: a GenAI model or the main developer?
NLnet: Many, probably most, grantees use GenAI in a very controlled way. Ideally, the logging requirement would make that controlled use apparent at a glance, not only to us, but also to other people looking at the code. We thought we could solve this with a prompt log requirement, but that clearly runs into a lot of resistance. So the question before us is: what is a good way to provide this transparency?

Possible alternative: GenAI policy per project

Participant suggests: all projects should have their own AI policy. This can be shared with NLnet to provide insight into how GenAI is used (and thus a prompt log is no longer needed). At the same time, it is informative for potential contributors.
NLnet: this is a very interesting suggestion, which we will take into consideration. More generally, much of the feedback included descriptions of how people used GenAI in their work. Perhaps that can be a form of disclosure as well: describe how you use GenAI tools.

Example of a GenAI policy for a well-known FOSS project

A well-known, large, successful FOSS project started receiving vibe coded contributions and discussed the use of GenAI. Its community decided: the humans submitting code are always responsible for it and must fully understand it. That means contributors can be called on it when something doesn't work, they can explain what the code does, and they can fix it. They are also responsible for making sure there is no copyright infringement.

Open source relies on the reputation of the contributor

Participant: an open source project is always a community project. Contributors do all their work in the open and need to build up some kind of reputation. A large PR from a new contributor will be discussed, and that discussion provides feedback on the validity of the work. Small projects are most at risk because they don't have strong communities to check contributions.

Goal is to ensure quality of work

Participant: Ultimately, we do not want to know how much someone used GenAI tools, but whether the quality of the code is good.

  • Therefore, focus on more structured acceptance testing.
    • For example: if someone implements an open standard, check whether there is a test suite for that standard and make sure the implementation passes its conformance tests.
  • If there is no test suite for the standard, then maybe support the development of such a suite.
So: care less about logging, and more about how we ensure the quality of the work, no matter whether it is done by a developer or by GenAI.
NLnet: Good suggestion to look at the goal rather than the means toward it.

Making a GenAI policy will be an iterative process

Participant: Although it would be great to get a perfect policy in one go, it is more likely to be a process of trial and error. So we need to look for a process where we can try things out and provide feedback.
NLnet: Good point. We probably will need adjustments to the policy. We are open to feedback and we also have to learn ourselves. This is a fast-changing field with many moving parts.
Other participant: It is unlikely that you can draft a one-size-fits-all policy, since all projects are different. For instance, an experimental new project will not have a test suite available to it.

Contributing to shaping this space

GenAI and how to deal with it is an evolving field. It would be great if we could help shape how we deal with it: how do we want the tools to work, and how can they provide the possibility to log their contributions to the final code?

Other topics / questions

Two other questions came up during the interactive session.

Is logging required retroactively?

No, logging is not required retroactively. The policy went into force on December 8, 2025.

Can you provide guidance on AI & copyright?

No, we cannot. It is the responsibility of the grantee to make sure that the work they submit can be licensed under an open source licence. It has always been the bedrock of NLnet grants that all work must be published in its entirety under a recognised free and open source licence, and ensuring that the work can be so licensed has always been the grantee's responsibility. This remains so.
Someone suggested it would be great if there were a collaborative space where people can share their findings about AI & copyright: keep an overview of the most popular models, warn each other about models with tricky Terms of Service that claim ownership, and so on.