Content and expertise contributed in partnership with Sara Holland, Dave Holt and Mark Nichols at Potter Clarkson
Protecting your IP when using AI tools in your R&D process
If you're a deep tech founder using ChatGPT to draft technical specifications, GitHub Copilot to write production code, or an AI platform to screen molecular candidates, you have IP questions that didn't exist three years ago. Most companies are ignoring them. That's a risk — not because the sky is falling, but because the decisions you make now about how AI fits into your R&D process will affect your IP position for years.
1. AI-generated outputs and your patent strategy
Under current UK patent law, an invention must have a human inventor. The UK IPO and the courts have consistently held that AI systems cannot be named as inventors.
While there is still a human in the loop, there is likely to be an appropriate person to name as inventor, allowing for patentability. That said, it may not be obvious who the correct inventor is: the person who developed the AI model, the person using it, the person selecting results from its outputs, or somebody else.
Longer term this may have implications for patentability: if there is no human in the loop, there can be no inventor, and the output cannot currently be patented. Further, if an "invention" is AI-generated, could it be argued to be obvious and therefore not patentable?
At present, we are not seeing this issue arise much in patent examination: it is still possible, for example, to protect an enzyme that has been designed using computational methods. Often there is genuinely a good deal of human input, both up front and downstream in the testing and confirmation of the designed outputs.
It is also not a requirement to describe in a patent application how something was designed. For example, we can simply describe and protect the engineered enzyme without mentioning how it was designed. On the other hand, a tool often outputs many "designs" that in practice don't work, leaving one stand-out candidate. This "negative" data can be useful when arguing for inventive step: it might have been obvious to have a go, but it was not obvious that this one candidate would be so much better.
The practical implication: if you're using AI tools in your inventive process, you need to document the human intellectual contribution at each stage. Always frame your innovation process as using AI as a tool; the human is the inventor.
It is also worth noting that use of AI has copyright implications. Computer code, for example, is usually protected by copyright, but many jurisdictions have legislated to make clear that an AI-generated output does not attract copyright protection. The UK government has recently indicated that it is likely to follow suit. This means that where an AI generates code, that code will not be protected by copyright. Again, involving a human in the development of that code will reduce this risk: provided the code has original human input, even if heavily assisted by an AI, it is likely to be protected.
2. What goes into an AI tool may not stay private
When you input proprietary data, code, or technical descriptions into a third-party AI platform, you need to understand what happens to that data. Many AI platforms retain input data for model training unless you've specifically opted out or negotiated enterprise terms. If your confidential technical information enters a training dataset, your trade secret protection may be compromised — not through malice, but through the default terms of service you clicked through without reading. This is particularly relevant for companies pursuing trade-secret-based IP strategies.
This is also relevant if you partner or collaborate with other companies to design products using your own proprietary algorithms. If company A lets you use its data to design it a product, you need to be clear with company A about what is going to happen to that data. From your perspective, the more data the better, and you may want to combine it with your own dataset to improve the underlying algorithm. Company A may not want you to do this.
3. The grant dimension
If your AI-assisted R&D is funded by public money, the IP questions get more specific. Grant funders expect you to own or control the IP generated by the project. If your R&D process involves AI tools whose terms of service claim rights over outputs generated using their platform, you may have a conflict between your grant obligations and your AI tool licence. Most current AI platforms (including OpenAI's enterprise products and GitHub Copilot) assign output rights to the user, but this varies by platform and by pricing tier. Check the terms for the specific tools you're using rather than relying on the general assumption.
4. Practical steps that protect you without slowing you down
You don't need to stop using AI tools. You need a lightweight protocol:

First, use enterprise or API tiers of AI tools rather than consumer tiers. These typically have stronger data protection and clearer IP assignment terms.

Second, maintain an invention disclosure log that records, for each significant R&D output, what the human contribution was and what role AI tools played.

Third, review the terms of service for every AI platform you use in your R&D workflow, specifically the data retention and output ownership clauses.

Fourth, ensure that confidential information isn't being entered into AI tools that retain training data.
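As a sketch of what an invention disclosure log entry might capture, the record below shows the kind of fields that support a later inventorship argument. The field names and example values are hypothetical, not a prescribed format; any structure that consistently records the human contribution alongside the AI tool's role would serve.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DisclosureEntry:
    """One illustrative record in an invention disclosure log."""
    entry_date: date
    output_description: str   # what was produced
    ai_tools_used: list       # which tools played a role, and at what stage
    human_contribution: str   # problem framing, candidate selection, validation
    confidential_inputs: str  # whether proprietary data entered a third-party tool

# Hypothetical example record.
entry = DisclosureEntry(
    entry_date=date(2025, 3, 14),
    output_description="Shortlisted enzyme candidate from computational screen",
    ai_tools_used=["in-house screening model (enterprise tier)"],
    human_contribution="Defined screening criteria; selected and lab-validated candidate",
    confidential_inputs="No proprietary data entered into consumer-tier tools",
)
```

The point of a structured record is that, years later, it can evidence exactly the human intellectual contribution that inventorship analysis turns on.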
5. The landscape is moving fast and the law hasn't caught up
The UK government's consultation on AI and IP (2022–23) left many questions deliberately open, and its recent report in March 2026 rowed back from a number of positions previously put forward. The EU AI Act addresses risk classification and transparency but doesn't resolve the inventorship question. Case law is developing but inconsistent across jurisdictions. What this means practically: the companies that will be in the strongest IP position in 2–3 years are the ones that are documenting their human contribution to AI-assisted inventions now, maintaining clean data hygiene around their AI tool usage, and building IP strategies that work regardless of how the legal landscape evolves.
This isn't a reason to panic — it's a reason to be deliberate. A short conversation with your IP advisor about how AI tools fit into your R&D workflow is one of the most valuable hours you'll spend this quarter.