How safe is AI?
As our world becomes ever more saturated with technology, from automation displacing jobs to students using ChatGPT in school, AI is the elephant in the room. One long-standing criticism of AI is its exploitation of unlawfully obtained user data, and that criticism has recently been fueled by a prominent lawsuit.
On September 6th, OpenAI, the creator of ChatGPT, and Microsoft (ChatGPT's main backer and funder) were hit with a damaging class-action lawsuit. Contrary to common expectation, the suit was filed not by ordinary civilians but by software engineers who use ChatGPT. They accuse the company of training its AI technology on personal information stolen from thousands upon thousands of internet users.
A lawsuit with such damning accusations is devastating to the public perception of not only ChatGPT but of all present and future AI models. This case, however, is further damaging to the company for two reasons. First, the plaintiffs are represented by none other than the giant law firm Morgan & Morgan. Second, this is not the first case of its kind: a similar, nearly identical complaint was filed against OpenAI in June, and countless pages of the new filing are repeated verbatim from that earlier case.
The central accusation is that OpenAI used the private data of millions of people, especially children, without their consent. More pressing still is the charge that this information forms the foundation of current and future AI products such as GPT-3.5, GPT-4, and DALL-E. The case also brings to light a fact previously concealed from the public: since expanding its data collection in 2019, OpenAI has deployed automated bots called "crawlers" whose function is to scrape and collect information from the web.
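For readers unfamiliar with the term, the core logic of a web crawler can be sketched in a few lines: fetch a page, pull out its links, and repeat for each link found. The sketch below is a generic, minimal illustration using only Python's standard library; it is an assumption-laden teaching example, not a description of OpenAI's actual crawlers.

```python
# Minimal sketch of the link-extraction step at the heart of any web
# crawler. Illustrative only -- not OpenAI's real system.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Turn relative links like "/about" into full URLs.
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


# A real crawler would download each page over the network and scrape its
# text; here we just parse a sample snippet.
sample = '<a href="/about">About</a> <a href="https://example.org/data">Data</a>'
print(extract_links(sample, "https://example.com"))
```

A full crawler wraps this in a loop: it keeps a queue of URLs to visit and a set of URLs already seen, downloading and scraping each page's content as it goes, which is how vast amounts of web data can be collected automatically.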
To fully understand the intricacies, one must understand open-source code. ChatGPT is built with the help of thousands of contributors who develop and access open-source components such as Redis and various crawlers. This makes it extremely hard to hold specific people accountable and introduces vulnerabilities into the information being uploaded. Malicious actors prey on those weaknesses, which is why attacks on open-source libraries have increased by an astonishing 742% since 2019.
However, some push this case aside, assuming it is driven by revenge and fear of job loss. The two engineers who brought the lawsuit have indeed claimed to be especially concerned that the rapid progress of ChatGPT and its engineering could render their "skills and expertise" obsolete, leading to their "professional obsolescence." Others simply turn a blind eye, considering the many benefits ChatGPT brings to their lives. Yet the urgency of this issue is paramount, especially given that the data is used without consent. The tangible harms are felt by everyday people, ranging from the pilfering of medical records to the exposure of private conversations. To put the scope of the issue in perspective: potentially every piece of information that has ever been exchanged on the internet is within OpenAI's unrestrained reach, all without users' knowledge or consent. And while children remain among the most vulnerable, this issue extends far beyond the iPads and iPhones of our innocent and unsuspecting children and teenagers; it is a vast problem that affects everyone who has ever touched an electronic device.
The future of AI remains debated; proponents and opponents argue endlessly without resolution. Yet this lawsuit might shift many people's stance toward AI. Even if the case itself does not create that impact, its verdict will certainly deliver a powerful blow to the losing side.