
ChatGPT and Your Data Privacy

By now, you have either used ChatGPT or at least heard of it. This AI chatbot has taken AI to the next level in understanding the complex nature of human language. It goes beyond what a quick Google search can do: it can read and understand the depth of your questions and respond with a level of knowledge comparable to speaking with a person informed on the subject. Since OpenAI first released ChatGPT in late 2022, it has evolved and transformed the professional workplace at a speed and scale never seen before. On the flip side, this sudden technological transformation has also quickly raised concerns on a variety of topics, including legal implications such as intellectual-property issues involving copyright, as well as privacy law.

What is ChatGPT and how does it work?

Before we can truly understand why there is such a heated privacy debate over the use of ChatGPT, we have to understand the basics of what makes it up. At the very core of ChatGPT is what is called a transformer. In short, you input a sequence of text, and the engine processes that sequence and transforms it to generate a response back to you. You input something, and it outputs something. Sounds simple, right? What makes AI mind-boggling is the amount of data needed to train it to give back responses a human would expect from another human. This part of the process is called the training phase, in which the transformer is fed enormous amounts of questions, phrases, and other text so that it can learn to identify different types of input and correlate them with responses drawn from its training data. Over time, it learns to adjust those responses so it can output an answer tailored to your specific input.
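To make that input-and-output loop concrete, here is a minimal sketch using OpenAI's official Python client. It is illustrative only: the model name is an assumption, and the sketch presumes an API key is available in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of the "input goes in, a generated response comes out" loop.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not prescribed by this post
    messages=[
        {"role": "user", "content": "Summarize the GDPR in one sentence."}
    ],
)

# The transformer turns the input sequence into a generated reply.
print(response.choices[0].message.content)
```

Note that every prompt sent this way leaves your machine and is handled by the provider under its data-use policies, which is precisely the privacy concern discussed below.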

Now, after amassing all of this data, AI models such as ChatGPT can perform as we see them do today while, in parallel, continuing to learn as more and more people use them. By this point, you may have noticed the glaring issue regarding AI models and privacy: for many of these models to be useful, they need real data; your data.

Data Privacy and ChatGPT

Data privacy is one of the biggest challenges AI models such as ChatGPT are facing. Companies including Apple and Amazon have gone so far as to ban their employees from using these AI tools outright. These bans all stem from the same key concern: data leaks.

Software engineers at Qualcomm, on the other hand, are highly encouraged to use AI. This encouragement has come from the top, from CEO Cristiano Amon. At a partner event in 2023 that I attended on behalf of ADLINK Technologies, Amon closed the event by discussing AI and its future. He backed the evolution of AI and directed his employees to use ChatGPT and other AI tools to "enhance their workflows" and "automate the mundane" tasks. Amon made it clear that he believes those who do not embrace AI will fall behind, and that AI tools should be used to make everyday routine tasks more efficient, reserving one's mental stamina for the complex tasks AI cannot accomplish.

However, using these AI models requires data. In Apple's case, the fear is that employees may enter highly confidential code into the models, making a potentially protected algorithm not so protected anymore. When AI tools like ChatGPT take in information from users, they store it not only for that same user to reuse and transform, but potentially for others as well.

ChatGPT Bans Around the World and Misinformation

Globally, the apprehension surrounding AI models has led some nations to ban their use altogether until regulations are put into place.

In early 2023, Italy banned ChatGPT countrywide over concerns about privacy and inaccurate information being presented to children by the AI. If you have used the tool, you may have noticed that a response to your question was sometimes inaccurate; perhaps you already knew the answer and were checking whether ChatGPT knew it too.

Italian regulators notified OpenAI of ChatGPT's breach of EU data privacy rules, initiating an investigation last year after temporarily banning the chatbot within Italy. The investigation revealed violations of the General Data Protection Regulation (GDPR), including exposure of users' messages and payment information, a lack of age verification, and concerns over data-collection practices and the potential dissemination of false information. OpenAI claims alignment with the GDPR and other privacy laws, citing efforts to minimize personal data in training ChatGPT. This incident highlights growing regulatory scrutiny of AI systems worldwide, with the U.S. Federal Trade Commission and EU competition regulators investigating AI startups like OpenAI and their ties to tech giants. Additionally, on March 13, 2024, the European Parliament approved the AI Act, the world's first comprehensive regulation of artificial intelligence, signaling broader oversight of the AI sector.

On April 29, 2023, Italy lifted its ban on ChatGPT. This came after Microsoft deepened its backing of OpenAI and new features were released within ChatGPT to be more transparent about how data is utilized and stored. These transparency features include warnings when the tool does not believe the answer it provided is fully accurate. Such warnings matter: consider Steven Schwartz, a New York lawyer who drew unwelcome media attention when a federal judge called him out for citing fake, ChatGPT-generated cases in his argument. Some rely on ChatGPT blindly without checking whether the assistant got it right.

The future of ChatGPT and legal challenges to it

AI models like ChatGPT appear to be here for the foreseeable future and bring real benefits in making mundane tasks more efficient. However, they come with the risk that any sensitive data we enter could become available for others to pull for themselves. The question in the legal realm is: how do we regulate this? Do we use these tools at our own risk, knowing they are open platforms? Do we police ourselves and others within our organizations on their usage? Personal-injury giant Morgan & Morgan has initiated an active class action against Microsoft and OpenAI to hold them "accountable for the mass theft of personal information and violations of privacy, property, and consumer rights." There are also other lawsuits against Microsoft and OpenAI alleging copyright violations for "scraping" personal data and copyrighted material to train their AI models. Take, for instance, the mass of new AI-generated videos.

The journey into the world of ChatGPT unveils not only the technological marvels it brings but also the ethical and legal challenges that accompany its widespread use. As the technology matures, so too will the discussions surrounding its implications on privacy and data protection. The coming months and years promise an intriguing exploration of the evolving legal landscape, providing insights into how society grapples with the integration of advanced AI tools into our daily lives.
