Sekou Owino: Legal issues arising from generative artificial intelligence


It is unlikely that there is any reader of this column who has never heard of artificial intelligence (AI), the emergent technology of the moment and of the near future. It is more likely that each reader has had an engagement with the AI tool known as ChatGPT (Chat Generative Pre-Trained Transformer).

This is a web-based generative artificial intelligence tool that can compose text and other content in response to questions and prompts from human users. It does this by drawing on patterns learned from vast amounts of material from the internet on which it was trained. ChatGPT had reached 100 million users within two months of its public launch.

Thus, AI tools of this category are now used in, and have had an impact on, healthcare, geopolitics, journalism, education and even the life sciences.

AI systems such as ChatGPT are now used by students, for instance, to answer examination questions.

The concern here is that its impact on education could be students obtaining results and qualifications that are not rightfully their own.

ChatGPT, for instance, passed examinations at the University of Pennsylvania’s Wharton School of Business and at the University of Minnesota Law School. 

A poignant question is to whom the results of ChatGPT-generated answers belong. Is it the student who submitted the essay or the tool that generated the answers in it?

These kinds of ethical and legal issues challenge societies to think about how to regulate the design, deployment and effects of AI tools across the world.

The other issue of concern is that generative AI tools affect human experiences and interactions. Given that these tools are products of the human mind, they have been known to replicate the biases and limitations that humans hold around race, gender and culture.

Even technology giants have found themselves on the wrong side of this, as seen in 2016 when a Twitter-based AI chatbot produced by one of them began to generate racist content and had to be shut down.

A news organisation also found itself facing potential legal liability when its AI news-generating system produced content that infringed the copyright of other sites and, in some cases, published false stories.

Replicate errors 

The issue is that artificial intelligence may in some instances be misleading. These tools are neither perfect nor objective; they replicate errors made in their development, which may have serious consequences, sometimes of an economic nature. About two months ago, Bard, an AI tool produced by Google, gave a wrong answer to a question, wiping up to US$100 billion off the developer’s market capitalisation.

Because these tools publish their content on the web, inaccuracies and false statements carry serious ramifications for the reputations of other persons. Just last week, a professor at Georgetown University Law School in the United States was shocked when an AI tool reported that he had been accused of sexual harassment during a class trip sponsored by the school.

It turned out that this was totally untrue and the professor had never taken such a trip with the students. This kind of invented falsehood could be defamatory and could attract a lawsuit against the developer of the AI tool. The legal conundrum is that there is doubt whether, under US law, a libel claim could arise in this case. Opinion is divided, and some legal scholars think that the more viable option would be a product liability claim against the developers of the AI tool for damage arising from a defective product.

Another potential defamation claim against the developer of a generative AI tool may arise from the mayor of a town in Australia who was falsely described as having been involved in a bribery scandal when he was, in fact, the whistleblower. The mayor’s lawyers sent a demand letter seeking correction of the error, failing which he would sue for defamation. Such a suit would probably be the first time the developers of an AI tool have had to contend with litigation over false claims published by their system.

In March 2023, a Chicago law firm sued the owner of an AI tool known as DoNotPay, claiming that it was acting unethically and contrary to the rules of the legal profession. 

Small claims court

DoNotPay, which describes itself as a robot lawyer, uses AI to provide legal services to the public. These services include sending demand letters for clients, lodging job discrimination complaints and representing clients in small claims court. The lawsuit alleged that DoNotPay was neither a robot nor a licensed lawyer, and that it held no law degree, the basic requirements for the practice of law in the United States. For its part, DoNotPay sees the case as one instituted by lawyers to protect their turf over work that can easily be done through AI systems. The case has not yet been decided.

Perhaps the area of generative AI that most urgently calls for legal regulation is its use to create what in the tech space is known as Deepfakes.

A Deepfake uses artificial intelligence to fabricate images of events that never happened or to mimic a person’s voice uttering words they never said. It is the audio-visual step beyond photo-shopping.

Deepfake systems create images and attach the faces and voices of real persons to them, depicting those persons as engaging in events or activities that never took place. The danger is that they often embed images of models and other women into pornographic content, implying that the women engaged in pornography. The legal challenge is that neither the law of libel nor the laws on revenge pornography or image rights can adequately address Deepfake material.

In the United States, for example, there is no federal legislation to protect an innocent person against Deepfake technology. A Deepfakes Accountability Act was proposed in 2019 to address this issue. However, most countries are yet to legislate in this area, let alone on AI systems generally.

In recognition of this, the US Commerce Department this week took steps towards establishing rules for the regulation of artificial intelligence tools. The intention is to ensure that AI systems are trustworthy, protect privacy and meet other consumer safety requirements.

The European Union appears to be at the leading frontier on this issue. Its Artificial Intelligence Act, proposed in 2021, has faced headwinds from the expected quarters of industry, that is, producers of generative AI systems who may not want to be regulated.

Given that most AI systems are web-based and accessible throughout the world, Kenya’s legislature would do well to consider legislation on this subject.

The truth, however, is that AI tools will need legally established guard-rails that address the real concerns about the consequences of these systems, their use by consumers and their effects on others.

Mr Owino is Head of Legal at Nation Media Group PLC.
