
The Legal Conundrum Around the AI-Powered Bot: ChatGPT

Films and documentaries have long attempted to depict the power and future uses of Artificial Intelligence, but that future now seems much closer to reality. ChatGPT took the internet by storm, racking up a million users within five days of its launch.

Google has dominated the search-engine industry for the past two decades; however, the tech giant may now face serious competition for the first time since its inception. The first such interactive bot, named 'ELIZA', was successfully tested at the Massachusetts Institute of Technology in the 1960s. In the present day, ChatGPT crossed the million-user mark within a mere five days of its launch, and its growth has been consistent, with almost 2 million active users.

This rapid adoption can be attributed to the bot's ability to save time and streamline various tasks. Microsoft recently confirmed a huge investment in OpenAI, the creator of ChatGPT. Bill Gates has highlighted that the investment is a testament to the technology's potential and that we will see even more use cases for ChatGPT shortly. As we move into 2023, the bot is a ready solution for those who do not care to go into the intricate details of a subject and want a helicopter view of any given topic, and it poses a potential threat to the employment of white-collar workers in multiple fields.

This paper attempts to analyse the loopholes in the present laws governing such activities over the internet and suggests measures to improve their governance.

ChatGPT: An Overview

ChatGPT stands for Chat Generative Pre-trained Transformer. It is a complex machine-learning bot trained to interact with users and carry out natural-language generation tasks. At present, the bot has been fed and trained on information only up to 2021. It was developed by OpenAI, a company co-founded by Elon Musk and Sam Altman that primarily conducts artificial-intelligence research and builds bots and programs intended to benefit humanity. ChatGPT can carry out various tasks such as answering questions, completing a given text or phrase, writing fictional and non-fictional content, generating human-like responses, summarising text, and generating complex computer code.
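For developers, the model behind the chat interface is also exposed through OpenAI's public API. Below is a minimal sketch in Python of how a question might be sent to the documented chat-completions endpoint; it assumes an API key in the OPENAI_API_KEY environment variable, and the model name and response fields are those published at the time of writing and may change.

    import os
    import requests

    # Minimal sketch: send one question to OpenAI's chat-completions REST endpoint.
    # Assumes OPENAI_API_KEY is set in the environment.
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "user",
                 "content": "Summarise the doctrine of vicarious liability in two sentences."}
            ],
        },
        timeout=30,
    )
    # The generated reply sits inside the first element of "choices".
    print(response.json()["choices"][0]["message"]["content"])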

How is ChatGPT trained?

The bot was developed and trained using semi-supervised learning. Data scientists and machine-learning engineers worked with datasets of which some portions were labelled and others were not. The model first learns from the labelled data and then uses that output to recognise patterns in, and predict labels for, the unlabelled data.
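What the paragraph above describes is commonly known as self-training or pseudo-labelling. The following is a toy sketch of the idea using scikit-learn's SelfTrainingClassifier; the iris dataset, base classifier and confidence threshold are purely illustrative assumptions, not details of OpenAI's actual training pipeline.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.semi_supervised import SelfTrainingClassifier

    # Toy illustration of semi-supervised self-training, not OpenAI's pipeline.
    X, y = load_iris(return_X_y=True)
    rng = np.random.RandomState(0)

    # Hide 70% of the labels to simulate a mostly unlabelled dataset (-1 = unlabelled).
    y_partial = y.copy()
    y_partial[rng.rand(len(y)) < 0.7] = -1

    # The base model is fit on the labelled portion; its confident predictions on the
    # unlabelled portion are added back as pseudo-labels and the model is refit.
    model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
    model.fit(X, y_partial)
    print("accuracy on the full dataset:", (model.predict(X) == y).mean())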

OpenAI's engineers say they collected huge amounts of unlabelled data from the internet and supplemented it with text sources available in the public domain. It has been speculated that the labelling work was outsourced to other companies, because labelling such an enormous pile of data in-house is impractical for a single company. Throughout, the bot has been trained with the purpose of holding conversations with users.

Google has dominated the search-engine industry since its inception, and ChatGPT seems to be the first product to disrupt that dominance. When we look at how the two function, however, the differences are clear: Google indexes web pages and helps users find information by displaying the pages relevant to a search query, while ChatGPT primarily aims to converse with users on a topic of their choice. Because ChatGPT works from the large body of labelled data fed into it, it is not as reliable or as up to date as the Google search engine.

However, looking at the present trend, the bot will only get better: as its user base grows, queries will become more complex and precise, and the bot will update and adapt itself accordingly, making the experience more useful and immersive for users.

The bot's biggest strength is the human-like, precise nature of its responses, which allows users to maximise their output; it can also help identify flaws and bugs in literary and coding work. Despite its unique and revolutionary interface, however, ChatGPT has some major limitations: accuracy issues, a cut-off in its training data, no access to the internet, no real-time updates, and the fact that it will not remain free forever, as OpenAI will monetise the bot at some point.

Judging by users' experience so far, ChatGPT has enormous potential as a program, and it offers a small glimpse of what Artificial Intelligence could look like in the future. The human brain is slower at raw computation and routine tasks but remains far superior at complex exercises such as analysing a situation and reaching a conclusion. The present version of ChatGPT displays a disclaimer regarding its usage, and OpenAI has made efforts to prevent the platform from being used for illegal activities.

However, users are clever enough to manipulate the software through carefully worded prompts and eventually extract the information they want. If, for example, an individual asks the bot how to harm another person, the software will not answer directly, but the user may still obtain the information by steering the conversation in another direction. If that information is then used to inflict harm in real life, who will be liable remains an open question.

Cybercrime Issues

Cybercrime is criminal activity that involves a computer, a networked device or a network, used for purposes such as spreading illegal information, launching malware and ransomware attacks, committing internet fraud and stealing financial information. For cybercriminals, ChatGPT has become a useful tool, helping them write malware and ransomware code with far greater ease and efficiency. Today, around 4.9 billion people have access to the internet and use it for a multitude of activities.

Access to the internet is no longer the main problem; rather, internet and cyber supremacy have become essential attributes for individuals, organisations and countries that wish to succeed in their respective fields. Recently, two CyberArk researchers, Eran Shimony and Omer Tsarfati, demonstrated in a study that the bot can be used to generate highly evasive, so-called polymorphic malware. The bot's existing guidelines and settings are meant to ensure that it will not create malicious software, yet it can still accelerate the process for those who already understand such code.

Once users are able to get the results they are looking for, they can generate error-free code for malicious activities on the web. Several reports have surfaced that active players on the dark web have been testing ChatGPT for generating code and programs such as infostealers and encryption tools. One dark-web actor recently uploaded screenshots showing how he developed an infostealer that identifies files of interest, copies them using the malware, and sends them over the web as an unprotected zip file, leaving the data available for third-party use.

Such programs can have serious implications for data privacy and security. Another actor on the darknet created Java and C++ programs that attempt to phish users' financial credentials. This kind of usage shows how individuals who are not technically skilled will nonetheless be able to generate code and launch cyber-attacks for ulterior motives.

Suppose an anonymous user sitting in Sweden, routing through servers in Pakistan, generates code with the help of ChatGPT and builds software that is used in a cyber-attack on an organisation situated in Spain; such a scenario raises difficult questions about which jurisdiction governs the cybercrime.

Also, if a user succeeds in manipulating the bot into giving the answer he or she needs, will the developers at OpenAI be liable for that person's actions in real life, given that the bot has aided or abetted the crime by providing pivotal information? And if the answer is yes, to what extent will the developers be liable? At present, there are no laws governing such issues. The first solution that comes to mind is that the developers should train and configure the AI-powered bot in such a manner that it becomes impossible for users to access such dangerous information.

However, this might also affect other functionalities of the bot, and it is very difficult to define which activities should be classified as 'legal' or 'illegal.' Even if the developers could devise a method to classify activities under such headings, it would affect the functioning of the bot: a large chunk of information would need to be taken down, limiting what the bot can offer and ultimately defeating its purpose entirely.

Thus, as a suggestion, government authorities need to work closely with the developers of such platforms to determine the extent of liability each party should bear. Secondly, governments need to enact dynamic legislation, keeping in mind the share of the population that uses the internet and the future use of technology and artificial intelligence, so that all parties involved in such illicit activities can be identified and punished accordingly.

At present, the developers of ChatGPT are working extensively on strengthening the defensive side of the program so that attackers are prevented from misusing the bot. As mentioned above, it is very difficult to identify which code is being used for what purpose, and equally difficult to tell whether a given piece of code was generated by the bot or written by a person, since content produced by ChatGPT can be conveniently copied and reused at will. To tackle this problem, the founders of the AI bot are considering embedding a "watermark" that cannot be erased, which would make it easy to distinguish work produced by the AI-powered bot from original human work.

However, the developers need to be very careful in choosing the form of the watermark, because editing tools make any document easy to alter; the design adopted must be detectable and, at the same time, difficult to remove or erase.
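One approach discussed in the research literature, and not confirmed as OpenAI's actual design, is a statistical watermark: while generating, the sampler is nudged towards a pseudo-random "green list" of tokens derived from the preceding token, and a detector later checks whether a text contains suspiciously many green-listed tokens. The toy sketch below shows only the detection side; the keyed hash, word-level tokens and 50% green fraction are illustrative assumptions.

    import hashlib

    def green_fraction(words, key="demo-key"):
        """Toy detector: fraction of words whose keyed hash (seeded on the
        previous word) lands in the 'green' half of the hash space."""
        green = 0
        for prev, cur in zip(words, words[1:]):
            digest = hashlib.sha256(f"{key}:{prev}:{cur}".encode()).digest()
            if digest[0] < 128:  # half of all tokens count as 'green' by construction
                green += 1
        return green / max(len(words) - 1, 1)

    sample = "the quick brown fox jumps over the lazy dog".split()
    # Ordinary text hovers near 0.5; a watermarked generator that deliberately
    # prefers green tokens would push this fraction well above 0.5.
    print(round(green_fraction(sample), 2))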

Some Other Issues and Threats
ChatGPT has been racking up users at a remarkable rate and seems to be the new power tool with answers to every question users pose on the internet. At the same time, many people have pointed out its serious pitfalls. Firstly, ChatGPT has been trained on information only up to 2021, so a user looking for information beyond that point will fail to get the desired output.

Secondly, as mentioned above, the bot works from tokenised, labelled pieces of information to determine answers to the queries entered by the user. It essentially makes a series of guesses, one token at a time, to arrive at its final answer, so there is a high possibility that it will argue wrong answers as if they were completely true. Such issues can escalate if a person relies on the bot for medical advice and ends up harming himself or herself because of a wrong output.
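To make the "series of guesses" concrete, the sketch below mimics how a language model picks each next token by sampling from a probability distribution. The vocabulary and probabilities are invented for illustration and say nothing about ChatGPT's actual weights; the point is that a fluent-sounding wrong continuation is sampled just as confidently as a correct one.

    import random

    # Hypothetical next-token probabilities after the prompt
    # "The capital of Australia is". Whatever is sampled is stated
    # with the same fluent confidence.
    next_token_probs = {
        "Canberra": 0.55,   # correct
        "Sydney": 0.35,     # wrong but plausible-sounding
        "Melbourne": 0.10,  # wrong but plausible-sounding
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print("The capital of Australia is", choice)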

Similarly, the bot is fed writing collected from across the world, and its answers are likely to mirror that material. It may therefore produce answers biased towards a particular religion, or biased on topics such as feminism and discrimination against women. Another issue with the platform is its potential for spreading fake information and enabling social-media swindles.

ChatGPT presents its output in a way that can make fake or unreliable data completely convincing; if content drafted with the bot is posted on social media, there is a high chance that users will be baited into engaging with such posts and may end up giving away sensitive information or data.

Thirdly, ChatGPT has become a menace for high-school authorities and even universities, as the bot can generate English articles and papers on any given topic within minutes. A recent study by a team of professors found that the bot generates better answers and articles than many students; assessed as an individual student, it would score higher marks than 70% of the cohort.

Students have been using the bot to write assignments, and this is hampering their education, as the work they submit shows no originality, reasoning, creativity or, most importantly, effort. The bot has taken the education world by storm, and universities in India, France and other European countries have issued notices against the use of ChatGPT in academic assignments.

A possible solution might be dedicated detection software, built by antivirus companies, ed-tech companies or similar firms working in partnership with universities and schools, that helps determine which content has been copied or generated using the bot and which is original. However, building such tools will take time and would require changes to ChatGPT's terms of use and data-storage policy.
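As an illustration of how such detection tools often work, the sketch below scores a passage by its perplexity under a small open language model, on the heuristic that machine-generated text tends to look unusually predictable. The choice of GPT-2, the threshold and the heuristic itself are assumptions made for illustration; real detectors are considerably more sophisticated and still far from reliable.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Toy heuristic: unusually low perplexity under a language model is treated
    # as a hint (not proof) that a passage was machine-generated.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        return torch.exp(loss).item()

    essay = "The Industrial Revolution transformed economies across Europe."
    ppl = perplexity(essay)
    THRESHOLD = 30.0  # illustrative cut-off, not a calibrated value
    print(f"perplexity={ppl:.1f} ->",
          "possibly AI-generated" if ppl < THRESHOLD else "likely human-written")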

At present, the bot is still under development, and OpenAI is working on making it smarter, faster and more robust. Its capabilities are already substantial and make one wonder what it will look like at full capacity. Not today, but someday, such bots will surely end up replacing humans in many jobs across different fields.

Companies that hire content writers and creative writers may stop doing so when similar, or better, content is available on the web far faster and with greater accuracy in grammar, punctuation and information.

The invention of such bots has attracted many investors keen to explore the future. Over the past few years, an exorbitant amount of money has been poured into the artificial-intelligence industry, giving birth to the term 'money laundering in the cloud.' The industry depends on expensive computing infrastructure used to build programs, write code and store data; with each passing day these costs grow more unwieldy, and fresh investment is sought to keep operations running.

Recently, OpenAI received a mammoth $10 billion investment commitment from Microsoft, with the deal described as 'multi-billion, multi-year.' By industry standards, however, the cost of developing such bots is closer to $2 billion, and the gap has raised questions about the transparency of the Microsoft-OpenAI deal and left many wondering whether such deals could in future be used for financial mirroring.

Conclusion
It is impossible to halt the development of such bots, given the number of people using them, their future capabilities and the amount of money involved. Identifying the risks discussed above and formulating a dynamic set of regulations would therefore benefit everyone involved in this multi-billion-dollar arena, which will continue to grow exponentially in the coming years.

The analysis above highlights the multiple risks associated with the present version of ChatGPT. Since classifying and taking down data across the web would defeat the founders' purpose entirely, a better system is needed, one in which users and developers each share liability to a certain extent while the actual perpetrator is tracked down and punished accordingly.

Looking at ChatGPT's terms and conditions of use and the laws currently governing the internet, it is evident that there is an immediate need either for significant changes to existing laws or for a new set of forward-looking, dynamic legislation, because the issues raised by this platform are new and different in nature.

Proper terms and conditions of use and liability, a robust system for handling cross-border activity, checks on the use of ChatGPT for academic work and assignments, and methods to effectively differentiate content generated by the bot from content written by humans are some of the issues that need to be addressed immediately.

Legislation addressing the problems discussed in this paper must therefore be introduced in a timely manner. Alternatively, a global body comprising technology, cybercrime and money-laundering experts could be constituted to examine the present issues and help shape regulations that command public confidence, something the present system appears to be failing to do.

A study of the present issues and laws makes it evident that much needs to be done on multiple fronts to ensure the smooth use of such bots in the future and, at the same time, to ensure that the AI-powered world is governed properly, without major hiccups or loopholes that could threaten the very existence of such interactive artificial-intelligence bots.
