Removing the licence and/or authors from a FOSS project would generally be a violation of the licensing terms. The tool(s) you use don't change the legal principles.
Of course, the big AI companies blithely ignore moral and legal issues.
> Can LLMs like Gemini, ChatGPT or Claude be used to generate an equivalent FOSS project but removed from its licence and authorship?
No.
The whole project would have to be open sourced as well (and some may argue the same applies to the LLM running on the backend, since it was trained on the AGPL code).
Using an LLM to strip the licence and generate a derived project from the original AGPL code is not a 'clean room implementation'; it is the equivalent of rewriting the original author's existing code.
In a sane world I would have agreed, but in the US at least I am not certain this still holds. In Bartz v. Anthropic, Judge Alsup expressed the view that the work of an LLM is equivalent to that of a person: see around page 12, where he argues that a human recalling things from memory and AI inference are effectively the same from a legal perspective.
https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/...
To me this makes the clean-room distinction very hard to assert. What am I missing?