AI legal challenges become more complex
Gaps are emerging in the willingness of countries to accept generative AI juggernauts such as ChatGPT, amid concerns that privacy, security, intellectual property and personal rights are being put at risk by the fast-spreading software.
If ChatGPT were a country, it would now be bigger than China.
The AI platform that set internet records from launch — a million users in the first five days; 100 million within two months — is estimated to have had more than 1.8 billion visitors in April according to website traffic platform Similarweb.
That’s nearly twice the traffic the site received in February and means the OpenAI website now ranks above Zoom, Netflix and search engines Bing and DuckDuckGo.
It’s yet another sign of the unstoppable spread of interest in generative AI since OpenAI launched ChatGPT at the end of November, and with the pending roll-out of OpenAI integration in Microsoft products, growth is set to accelerate even more.
For businesses, AI is coming whether they are ready or not, according to Baker Tilly experts, who stress that while many leaders are willing to learn about and test AI software on the web, few are truly prepared for the changes about to occur.
Baker Tilly Canada Partner in Digital Technology & Risk Advisory Deepak Upadhyaya says many clients are just beginning to come to terms with how large language models such as ChatGPT and the newer GPT-4 can help their businesses.
For most business users, access to GPT tools comes via the OpenAI website, either accessing the legacy ChatGPT tools or paying a subscription to access GPT-4.
In both cases, content from those conversations can be used by OpenAI to help train its models and improve the service, according to OpenAI’s terms and conditions.
That raises risks around entering content that might be commercially sensitive, which could resurface elsewhere at a later time.
“There are lots of questions clients are still trying to resolve, like how they might set protocols in place for use or how they might ensure they are using the right tool in the right way,” he says.
“If they are using the information that is generated, how reliable is it? How much can they trust the response?
“But we are also seeing hesitation over big questions regarding governance — how much internal data they are willing to run through a model, how much of that needs to be deidentified, and who has custody of that data throughout.”
Those questions are amplified for businesses using the web interface for ChatGPT, he says, but can also apply to those using the ChatGPT API to connect to enterprise applications and tools.
OpenAI says it does not use API content to train its models unless users opt in; however, it retains that data for 30 days, with all customer data stored in the US, potentially in breach of data-residency requirements for companies in some other countries.
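One practical mitigation, whether prompts go through the web interface or the API, is to redact obviously sensitive tokens before any text leaves the organisation. The following is a minimal sketch assuming a few illustrative regular-expression patterns; a real deployment would use a proper data-loss-prevention tool rather than a hand-rolled list:

```python
import re

# Illustrative patterns only — real PII detection needs a dedicated DLP tool.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive tokens with placeholders before the text
    is sent to an external service such as a hosted language model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A gate like this would sit between internal tooling and any outbound API call, so that whatever OpenAI retains for its 30-day window never contains the raw identifiers.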
ChatGPT’s privacy concerns are just the beginning
An indication of how those risks are yet to be fully understood can be seen in the recent actions of Italy’s data protection authority, the Garante per la protezione dei dati personali (GPDP, or Garante). The authority temporarily banned access to ChatGPT in Italy until the company signalled it was willing to address several key concerns, including a lack of age verification (users are supposed to be 13 or older, and under-18s require parental permission).
A bigger concern was OpenAI’s potential use of personal information which could be shared in conversations on the platform and potentially be reused or resurface elsewhere through the training process.
Although Garante relented in early May, Baker Tilly Italia Legal lawyer Carmen Dinnella warns few European countries have legal frameworks that can adapt swiftly to the rapid rise of generative AI platforms.
And without confidence that they can comply with regulations, businesses will be slower and more wary about adopting this and other emerging technologies.
“We already have a big difference in the use of these technologies between the large Italian companies and small to medium enterprises,” she says.
“Based on a study from the Milan University’s artificial intelligence observatory, around 61 per cent of large Italian companies have at least one AI-based project. But among SMEs, only 15 per cent have these kinds of projects, and they are usually the ‘simplest ones’ in terms of technological complexity.
“It is not just a technological problem, but a cultural problem and a problem based on the specific skills that are necessary when you have to deal with AI.”
SMEs are more sensitive to the organisational changes that might accompany widespread use of AI, she says, and less able to create the new business models, systems and processes needed to leverage their adoption.
They can also find it harder to align their use of technology with the raft of legislation and regulations that might interact with the use of AI in different contexts.
“We don’t have rules or laws that easily allow businesses to use these technologies,” she says.
“One of the main problems relates to the liability of AI: it is not yet clear how AI should be treated on questions of liability.
“In the case of criminal liability, for example, Italian law presupposes that there is always a natural person who commits a crime, either with intent or through negligence. This concept clearly cannot be applied to AI, with practical, ethical and sanctioning consequences.
“Our copyright legislation is not adequate to be extended to artificial intelligence, because it is strictly tied to the presence of a human author or creator, the work being an expression of that person’s intellectual effort.
“Then there is the burden of data protection and data privacy. It is very demanding for businesses to comply with GDPR and data privacy rules and we don’t really have the answers for how AI fits into that framework.”
European countries ponder their response
Other European countries have watched the Italian decision closely, with Ireland’s Data Protection Commissioner warning that thousands of similar large language models have sprung up, offering both the same kind of service as ChatGPT and the same potential risks.
German Digital Infrastructure Minister Volker Wissing took a more pragmatic stance, saying that if more countries followed Italy’s example in banning generative AI, “we will not see any AI applications being developed in Europe.”
But German laws pose their own challenges for corporate use of ChatGPT, including those that relate to intellectual property.
Dr Christian Engelhardt, a partner and attorney-at-law at Baker Tilly Germany, says he is trying to raise awareness among his clients that entering sensitive information into ChatGPT carries significant risk.
“Especially for clients who have a bunch of employees who use computers on a daily basis like everybody else, there is the risk that trade secrets or other confidential information could be inputted into ChatGPT,” he says.
“Under German law covering trade secret protection, you have to be able to demonstrate that you took reasonable measures to keep that information secret.
“Allowing people to use ChatGPT or similar software without restriction could be interpreted by a court as a failure to meet that requirement.
“To put it in practical terms, if I have a trade secret, I allow my employees to use ChatGPT without restriction, and that secret gets inputted or leaked, I will probably lose a case suing someone for trade secret misappropriation because I wouldn’t be able to show I took reasonable measures to protect it.”
For German companies, that means having heightened awareness about what material might be put into ChatGPT, Dr Engelhardt says, but he believes it is inevitable that trade disputes, copyright breaches and other matters will end up in the courts.
“I don’t think we’re going to see big cases or decisions in the next year but we will once you start to see things partially created by generative AI,” he says.
“When and where, I’m not sure, but I think it will happen and we will also see very different results in different jurisdictions.”
Even if businesses can hold back the tide of ChatGPT use, they may struggle to constrain the next wave of AI change — tools that incorporate the functionality and power of large language models into ordinary workplace software.
Google has recently unveiled plans to incorporate generative AI across Workspace’s documents, Gmail and slideshow products, while Microsoft plans to roll out Copilot across PowerPoint, Word, Outlook and Excel.
Mr Upadhyaya says businesses need to plan ahead and be ready for ubiquitous access to AI.
“You get used to seeing amazing things but the thing that really blew me away was the launch of Microsoft Copilot — it just looks incredible,” he says.
“It’s mind-boggling in terms of the implications for everyone to have access to embedded AI in tools you use all the time.”
To respond, he says organisations must have a plan around the type of data that might be released, how they are going to secure it, and whether there is some material too sensitive to be used.
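That kind of plan can be turned into a simple pre-submission gate: classify each piece of material against the organisation’s data-classification scheme and block anything above an agreed sensitivity ceiling from reaching an external model. The sketch below uses hypothetical keyword triggers standing in for a real classifier:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical markers — a real scheme would come from the organisation's
# own data-classification policy, not a hard-coded list.
CONFIDENTIAL_MARKERS = ("client name", "contract value", "source code", "salary")

def classify(text: str) -> Sensitivity:
    """Crude keyword-based classification, for illustration only."""
    lowered = text.lower()
    if any(marker in lowered for marker in CONFIDENTIAL_MARKERS):
        return Sensitivity.CONFIDENTIAL
    return Sensitivity.INTERNAL

def may_submit(text: str, ceiling: Sensitivity = Sensitivity.INTERNAL) -> bool:
    """Gate applied before any prompt is sent to an external AI service."""
    return classify(text) <= ceiling
```

The value of even a crude gate is organisational rather than technical: it forces the decision about what is “too sensitive to be used” to be made once, as policy, instead of by each employee at each prompt.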
“You have to move fast, now, to get the balance right between leveraging this very powerful platform and managing risk.”
A related case study shows how Baker Tilly’s Digital experts are developing multi-stage solutions to address the cost and complexity of working with AI.