PROTECTING CREATIVE OWNERSHIP IN THE AGE OF GENERATIVE AI
Table of Contents: I. Abstract. II. Introduction. III. Terms and Conditions Analysis. IV. Recommended Guidelines. V. Conclusion. VI. References.
I. Abstract
In recent years, Large Language Models (LLMs) have been widely adopted in creative fields, supporting founders in the creation of new products, software, and services. The main purpose of this paper is to understand the risks involved in using generative AI to develop strategic intellectual property so that LLMs can be used responsibly. To that end, this paper analyzes the terms and conditions of the free versions of ChatGPT, DeepSeek, and Mistral. The results of this analysis suggest that failing to understand the terms and conditions, using the free versions of LLMs (which lack integrated opt-out options), and not documenting Inputs that contain registrable information can cause irremediable damage to the value of founders’ intellectual property (IP). Overall, this study contributes to protecting founders’ IP by providing guidelines that organizations can follow to mitigate future legal actions and the loss of IP rights arising from the use of generative AI.
II. Introduction
With the rise of Artificial Intelligence (AI), Large Language Models (LLMs) have been massively adopted by entrepreneurs and content creators for creating their logos, marketing campaigns, and, more importantly, for developing their products. While LLMs have brought innovation, lower costs, and increased productivity for their users, they have also blurred the boundaries between human and machine-generated work. The following analysis examines the risks that startup founders and developers face when using LLMs to create software, products, and, more broadly, their proprietary information. Before starting this analysis, it must be clarified that this information applies only to free accounts on ChatGPT, DeepSeek, and Mistral. It is also especially important to distinguish between Inputs and Outputs in these tools: the former are the prompts users enter into the LLMs, while the latter are the results generated by the LLMs. With this distinction clarified, three points can be analyzed: first, the lack of protective laws and terms and conditions applicable to LLMs; second, the risks of disclosing Intellectual Property (IP) in the Inputs; and third, the risks of using the Outputs generated by the AI system. Lastly, guidelines for the correct use of LLMs will be recommended.
III. Terms And Conditions Analysis
Firstly, neither Mexico nor the United States has specific laws that protect users of LLMs regarding the ownership of their Inputs and Outputs. As a result, users of these tools need to understand each provider’s terms and conditions in order to mitigate risks by identifying how the provider manages IP, data, and Outputs. Understanding the terms and conditions of the LLMs is essential to mitigating legal consequences, as the following situations outline.
Secondly, it is important to know what happens to the Inputs in LLMs. The applicable terms and conditions vary by the company owning the LLM, and it is crucial to understand their key differences in order to protect the founders’ content. Some providers are automatically granted a perpetual license to every Input their users share on the platform, which creates several risks when sharing any key information. If IP was not legally registered before being introduced as an Input into the LLM, it can be difficult to register later, because the information will have become public and the LLM can share it with any of its other users in the future without having to pay any damages for reproducing those Inputs.
Thirdly, regarding the results produced by LLMs, Outputs receive no IP protection because the law only protects IP created by humans. This means that content produced entirely by AI tools is not entitled to authorship rights, preventing the companies using this content from registering and protecting it. Moreover, using LLMs’ Outputs can also be risky and can have legal consequences, because the Outputs may infringe the IP of third parties.
Specifically, ChatGPT establishes in its terms and conditions (TCs) that users are, at all times, the owners of their Inputs and Outputs (Content), but states that it can use said Content to develop, maintain, and improve its services. In other words, if the TCs are accepted, ChatGPT is granted the right to use users’ data for almost any general purpose. In a similar fashion, DeepSeek’s TCs do not claim ownership of the Content, but DeepSeek retains the right to use it to operate, develop, or improve its services. Lastly, Mistral asks for a perpetual license to maintain and optimize Mistral’s AI Products. As explained in the introduction, this applies to free accounts; in all three cases, a business premium account gives users the right to opt their data out of the AI training system.
Furthermore, if the LLM is allowed to use users’ Content at any time, this creates potential risks: it may disclose confidential information and reproduce similar or identical data to other users. The use of these LLMs can also harm third parties when the generated content infringes copyright, yet these LLM companies impose liability limitations on their users in their TCs. For instance, in the event a claim or legal action arises from the use of AI-generated content, OpenAI and DeepSeek establish that, to the extent permitted by law, they will not indemnify their users in any way. Mistral, in contrast to the other LLMs, will indemnify users for damages stemming from any third-party claim asserting that the Mistral AI Products breach or misappropriate a third party’s intellectual property rights.
IV. Recommended Guidelines
Given the legal vulnerabilities related to the use of LLMs, it is important to follow some guidelines while using them, in order to keep pace with competitors in productivity and innovation while preventing future lawsuits, the impossibility of registering IP, and the granting of perpetual licenses over vital information to AI companies. The following guidelines are recommended:
1. Explain the risks of Inputs to all employees and, if possible, purchase LLM licenses that prevent the provider from using Inputs in its data-training system.
2. Verify licenses and terms and conditions before using any LLM.
3. Implement traceability measures in order to have proof of how the Outputs were created and to be able to justify the extent to which the IP was created with human intervention.
4. Ensure that employees are using the LLM provided by the company, which protects the startup’s IP.
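As an illustration of the traceability measure in guideline 3, a startup could keep an append-only log that records every prompt and Output together with a timestamp, the responsible human, and content hashes, so it can later document what was disclosed and where human intervention occurred. The sketch below is a hypothetical minimal example in Python; the function name, record fields, and log format are assumptions for illustration, not part of any provider’s tooling.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_llm_interaction(log_path, model, prompt, output, author):
    """Append one timestamped record of an LLM exchange to a JSON Lines log.

    SHA-256 digests of the prompt and output make later tampering
    detectable, and the record documents who prompted which model
    and exactly what information was disclosed.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,        # e.g. "chatgpt-free" (hypothetical label)
        "author": author,      # the human responsible for the prompt
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON object per line
    return record
```

A log of this kind, kept alongside the human-authored drafts and revisions, gives the startup contemporaneous evidence of the extent of human intervention in each piece of IP.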
V. Conclusion
LLMs offer a great advantage in generating new solutions and content and can be a great tool in almost any business. However, the indiscriminate use of these systems without preventive measures can be problematic, exposing users to IP lawsuits and leaving them unable to protect IP developed with the help of AI. To reiterate, these issues affect users of free accounts; premium accounts generally offer more protections, but a review of the terms and conditions is always advisable. Following the guidelines shared here can help mitigate these risks, especially for developers and content creators. Protecting IP in this new environment must be seen as a vital part of the process of creating new products and services. Understanding and applying preventive measures helps a startup stand out from the competition and attract more investment. More importantly, being able to guarantee that your intellectual property remains yours and can be registered by you in the future will prove invaluable.
VI. References
1. DeepSeek. DeepSeek Terms of Use [Online]. 2025 [cited 2025-11-08]. Available from: https://cdn.deepseek.com/policies/en-US/deepseek-terms-of-use.html
2. Mistral AI. Legal terms and conditions [Online]. 2025 [cited 2025-11-08]. Available from: https://mistral.ai/terms#terms-of-service
3. OpenAI. Terms of Use [Online]. 2024 [cited 2025-11-08]. Available from: https://openai.com/policies/row-terms-of-use/