
ChatGPT: The Good, The Bad, The Ethical And Legal Implications


The emergence of ChatGPT has drawn intense attention from AI experts, academics, researchers and, most importantly, the public. The prospect of seamless, uninterrupted interaction with a chatbot is as impressive as it sounds, even if it has been a long time coming. So what is ChatGPT, which has attracted rave reviews as well as sharp criticism of what it offers and of its impact on people and society? ChatGPT, officially known as Chat Generative Pre-Trained Transformer, is a much-celebrated artificial-intelligence chatbot developed by OpenAI and launched in November 2022. The technology is built on OpenAI’s GPT-3 family of large language models and fine-tuned using both supervised learning (which involves training on known inputs paired with expected outputs) and reinforcement learning techniques.
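To make the “known input and expected output” idea concrete: supervised fine-tuning data is essentially a collection of prompts paired with ideal responses written or vetted by humans, while the reinforcement-learning stage uses human rankings of alternative responses as a reward signal. The toy Python sketch below illustrates only the shape of such data; it is not OpenAI’s actual data format or training code.

# Purely illustrative: the *shape* of data behind the two fine-tuning
# stages described above. Not OpenAI's real format or training code.

# 1) Supervised fine-tuning: a known input paired with an expected
#    output, written or vetted by human labellers.
supervised_examples = [
    {
        "prompt": "Explain photosynthesis to a 10-year-old.",
        "expected_output": "Plants use sunlight, water and air to make their own food...",
    },
]

# 2) Reinforcement learning from human feedback: labellers rank several
#    candidate answers, and those rankings become the reward signal.
ranked_comparison = {
    "prompt": "Summarise this article in one sentence.",
    "candidates": ["Summary A ...", "Summary B ...", "Summary C ..."],
    "human_ranking": [1, 0, 2],  # candidate indices ordered best to worst
}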

Undoubtedly, ChatGPT is an immensely useful tool, despite the critical positions held by sections of the public and some AI experts. Notably, it often cannot handle the large number of requests it receives and may experience downtime; when this happens, ChatGPT tells you, in a three-stanza poem with its own rhyme scheme on the login page, to “check back soon.” The short, witty poem makes sure to reiterate that “ChatGPT, the future of chat, that’s a fact.” Setting aside the grammatical issues that line raises, it is a claim worth examining in depth against the ethical and legal implications of ChatGPT’s overall features.

ChatGPT’s exceptional features attracted over one million users in its launch week and provoked stiff competition. Google’s management reportedly treated it as a “code red”; this is understandable, as many regard ChatGPT as the first real challenge in years to Google’s dominance of web search. It is interesting to note that ChatGPT is built on GPT, which itself uses the transformer, an AI architecture invented at Google. And how better to explore ChatGPT than to involve some interaction with it in the making of this article? I asked the chatbot to define itself, to which it responded: “CHATGPT is a state-of-the-art AI language model developed by OpenAI. It is a Generative Pre-trained Transformer (GPT) model that uses deep learning techniques to generate human-like text based on the input provided. It has been trained on a massive corpus of diverse internet text and can perform various language tasks such as answering questions, generating creative writing, and translating text”. When asked to highlight its key features, it listed only seven, all relating to text generation. Although mimicking human conversation is indeed its main feature, it has other capabilities as well: it stores previous conversations and recalls them where necessary (a pattern sketched below), and it can write and debug computer programs, answer test questions (sometimes at a level higher than the average human test-taker), compose song lyrics, emulate a Linux system, simulate an entire chat room, play games like tic-tac-toe, and simulate an ATM. As impressive as this may sound, these features carry inherent legal and ethical issues arising both from the training data used to achieve them and from how users utilise them.
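As an aside on the “remembers previous conversations” feature: when developers build on OpenAI’s models directly, that memory is not magic; the usual pattern is simply to send the earlier turns back to the model with each new question. The rough sketch below shows that pattern. It is written against the current (1.x) OpenAI Python client, which post-dates the launch described here; the model name is a placeholder, and the snippet assumes the openai package is installed and an OPENAI_API_KEY environment variable is set.

# Rough sketch of multi-turn "memory": prior turns are appended to the
# message list and resent with every request, so the model can refer
# back to them. Assumes the openai 1.x client and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,       # the full conversation so far
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Name one game ChatGPT can play."))
print(ask("Explain the rules of that game in two sentences."))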


Legal Implications

The calibre and volume of the data used to train ChatGPT largely determine its performance. While OpenAI has not publicly disclosed the size or origin of the training data used for ChatGPT (as is typical for major AI releases of this nature), it is widely understood that the model was trained on datasets drawn from internet content, including web pages, books, essays, and other publicly available text sources. ChatGPT therefore tends to answer requests based on these text sources without crediting or referencing them. This raises a serious copyright question, particularly in light of ChatGPT’s widespread use in academia and content creation, and by students completing academic assessments. The critical questions are then: who owns the intellectual property in the generated content? Is ChatGPT guilty of copyright infringement? Can you sue ChatGPT? Are users culpable for content crafted with ChatGPT?

The copyright laws of most countries vest ownership in the work’s author, and the author of a given work is the person from whom it originated or who made the arrangements necessary for its creation. Originality is determined by the amount of skill, labour, and effort expended to produce the work. Originality does not mean newness; it simply means that the work is not copied from another source. This makes it difficult for copyright infringement claims to arise from generative AI systems like ChatGPT, as these systems can create original works that are difficult to attribute to a specific creator or source. When asked about this, the chatbot vehemently denied merely copying and pasting content from websites.

Nonetheless, copyright in any text generated by ChatGPT subsists in OpenAI. Hence, as the owner and creator (author) of ChatGPT, OpenAI may enforce copyright wherever necessary. It is also OpenAI that has the capacity to sue and be sued, not ChatGPT, as the chatbot is not a legal person. Additionally, the way content is used largely determines whether there is copyright infringement. For instance, can you use content generated by ChatGPT for commercial purposes and make money from it? To gain clarity on the issue, I asked the chatbot whether the content it generates could be used commercially. It did not give a definite answer; it stated that such use might be permitted under OpenAI’s acceptable use policy in some circumstances, while in others it may constitute copyright infringement. It shifts the responsibility to users to determine which use cases require a licence from OpenAI or from the platform where the text was generated.

Furthermore, ChatGPT raises cybersecurity concerns, as the tool can be used by scammers to compose polished, convincingly crafted phoney emails. The question, once more, is how to allocate liability for any legal harm arising from such acts. The general opinion is that, while OpenAI bears primary responsibility for monitoring the chatbot’s use and taking reasonable measures to prevent fraudulent use by bad actors, the bad actors themselves will always be first in line to bear liability.


Data Privacy

The next class of legal issues raised by the emergence of ChatGPT relates to data privacy. The key questions here are: does the database used to train ChatGPT contain personal data, and how does ChatGPT handle personal data? When prompted, the chatbot admitted that the database used for its training may contain personal data, but clarified that it does not collect, store or use personal data. OpenAI did not train it to process or access personal data, and users are strongly warned not to provide input containing personal data or confidential information to the chatbot. OpenAI is compliant with the data protection regulations of many jurisdictions and has a data privacy policy available on its website.


Ethical Issues

The popularity of AI-generated images in 2022 was quickly followed by a wave of criticism from artists, notably illustrators, outraged that their work had been exploited to train AI without their permission, credit, or remuneration. They argued that the learning models behind generative AI condense an entire life’s work into a few hours of training and then respond in seconds, without the experience, labour, or money that artists, and people in general, have devoted to mastering their skills.

This sentiment also applies to ChatGPT. The text used to train it contains articles such as this one, which took days to write and polish, yet the chatbot draws on such material to produce responses in a matter of seconds. While ChatGPT has not yet been monetised and is available for free, it is only a matter of time before a subscription is required for complete access, as with most generative AI systems. Perhaps in the future a method will be devised for compensating and acknowledging the sources used to train generative AI, similar to how Instagram, TikTok, and YouTube handle music. As of now, computer-generated work is protected under copyright law and does not constitute infringement.

Another ethical concern is plagiarism. Educational institutions care deeply about the originality of work submitted by students, and ChatGPT’s features have compounded the issue by making it easier for students to plagiarise almost undetected. While there are AI tools developed with the sole purpose of detecting AI-written content, they are not 100% accurate and may generate false positives on human-written work.

In conclusion, there have been both glowing and less pleasant opinions of ChatGPT, with a flood of critical and positive reviews since its emergence. These perspectives depend largely on the user and how they choose to use its features. As engagement with the chatbot continues, further legal, ethical and philosophical questions will undoubtedly arise; indeed, it is too early to identify every legal and ethical implication of the tool, as more issues will emerge only with use and time. Yet, despite the negatives, we must admit that this is a very useful and handy tool. It is not yet at its peak, and OpenAI openly acknowledges its limitations by admitting that the chatbot’s responses are not always ideal. To help it improve, users can upvote or downvote responses and regenerate those that are not relevant.


Moshood Yahaya

Founder, Applied AI Society, Bradford
