
This article discusses the legal implications you will face if you are developing, deploying, or using an in-house AI system. Which legal obligations must you comply with, and which open questions should you explore?

The AI boom of 2022

Where is the hype around AI coming from? The popularity of ChatGPT, starting at the end of 2022, brought unexpected attention to this technology and triggered a huge wave of investment from tech giants, prompting multiple companies to accelerate their own chatbot technologies based on Artificial Intelligence.

One of the reasons for the sudden success of this technology is the remarkably human-like answers these chatbots can give. The secret behind this is their ability to predict which words are most likely to follow one another, which creates the impression of talking to another human.
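To make this concrete, here is a toy sketch of that idea: a minimal bigram model that counts which word follows which in a small text, then generates a continuation one predicted word at a time. Real chatbots use vastly larger neural networks, but the underlying principle of picking a likely next word is the same; the corpus and names below are purely illustrative.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; real models learn from billions of words.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Pick a likely next word, weighted by how often it was observed."""
    candidates = following.get(word)
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```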

The AI Act

If you are implementing or developing an AI system, you will face certain regulations, most notably the world’s first major act regulating AI: the new European Union-wide regulation called the AI Act. The Act applies to you if your company:

  • develops or uses AI in the EU, 
  • offers AI system solutions to people within the EU, 
  • or has an AI system that processes data about individuals within the EU.

While AI has many benefits, it also has its dangers, so the AI Act sets up four categories based on the risk an AI system poses to democratic values: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable risk: AI systems that fall into the unacceptable risk category are banned outright, with only narrowly defined exceptions.

High risk: AI systems that belong to the high-risk category are not forbidden but are strictly regulated. The AI Act focuses mainly on this category.

Limited risk: AI systems in the limited-risk category face less stringent obligations, mainly transparency requirements.

Minimal risk: Last but not least, AI systems with minimal risk, which are perceived to pose no danger to society, are not regulated by this Act.

The AI Act establishes five principles:

  1. Security (for example, protecting against the risks of data poisoning and model poisoning)
  2. Prohibition of discrimination (the importance of the training data used during the learning and development of the AI)
  3. Traceability (it is the European Commission’s task to follow the effects of the new regulation; for this purpose it created the new European AI Office within the Commission)
  4. Transparency (for a developer, this means proper documentation)
  5. Equity (AI systems are used in many areas, from employment processes to medical diagnosis. We have to be aware that they can perpetuate social, racial, gender, or other inequalities. You have to aim to address and avoid AI bias within your AI system.)

A few criteria to keep in mind

The future of the AI Act holds some exciting questions. Will the creation of the AI Act lead other countries to come up with their own major regulations? Will it set a standard for future non-EU regulations? If so, will those regulations exceed the level of caution the AI Act establishes?

When developing, deploying, or using an AI system, there are criteria you have to keep in mind to comply with all applicable legal requirements. You must identify which risk category your AI system falls into before development starts. Once you know this, you can navigate the applicable rules more easily and make sure that your AI system won’t cause unexpected legal problems. You should also log all activities during the development and training process to demonstrate that your in-house AI system fulfills the criteria derived from the principle of transparency.
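What such activity logging could look like in practice is up to you; the sketch below shows one possible approach, using Python’s standard logging module to keep an append-only, timestamped audit trail of training events. The event names and fields are our own illustrative choices, not something prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log for development and training activities.
logging.basicConfig(
    filename="ai_training_audit.log",
    level=logging.INFO,
    format="%(message)s",
)

def log_event(event_type, **details):
    """Record a timestamped, structured training event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logging.info(json.dumps(record))

# Illustrative events you might want to be able to reconstruct later.
log_event("dataset_registered", source="internal_support_tickets", rows=12000)
log_event("training_run_started", model="support-bot", version="0.3.1")
log_event("evaluation_completed", accuracy=0.91, bias_checks="passed")
```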

Always aim to be fair to users and to avoid unnecessary risk. Aim for transparency: users must be aware that they are interacting with an AI, not with a human. But is it enough to include this information in the privacy policy? Knowing that the majority of users never read privacy policies, should you inform them more directly? Do you need to make sure that the transparency of your AI system is reflected in everyday practice, rather than fulfilled only on a theoretical level?
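One way to move beyond the privacy policy is to make the disclosure part of the conversation itself. The sketch below assumes a hypothetical send_message callable standing in for whatever chat backend you use; the disclosure wording is ours, not legally mandated text.

```python
# A minimal, visible disclosure shown at the start of every conversation.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Its answers may be incomplete or incorrect."
)

def start_session(send_message):
    """Open every conversation with an explicit AI disclosure.

    `send_message` is a hypothetical callable that delivers text to the user.
    """
    send_message(AI_DISCLOSURE)

def deliver_bot_reply(send_message, reply_text):
    """Label every reply so the AI nature stays visible in practice,
    not only in the privacy policy."""
    send_message(f"[AI assistant] {reply_text}")

# Example with print standing in for a real chat channel:
start_session(print)
deliver_bot_reply(print, "Our API rate limit is 100 requests per minute.")
```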

A question arises naturally in connection with ethical problems: since chatbots themselves are amoral tools, how can they become unethical? What makes the difference between an ethical and an unethical AI system? Human decisions are involved throughout the design and training of an AI system, and these decisions shape how the chatbot operates. The existence of a moral AI system therefore depends on the moral standards of the individuals participating in its creation.

What else should you consider with your in-house AI system?

Let’s review some of the open questions you should consider while dealing with your in-house AI system. As in every professional relationship, you probably face contractual requirements towards your clients that you are obliged to comply with. The question here is: how can you meet all the confidentiality obligations binding you towards your clients when using chatbots? Many, though not all, chatbots can use the data you enter for self-training purposes and might disclose the entered information in their responses. How can you assure your clients that their data is still safe with you? If you are operating within the EU, GDPR rules also apply here.
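A practical, if partial, safeguard is to scrub obviously identifying data from prompts before they ever leave your systems. The sketch below is deliberately simplistic: the regex patterns catch only e-mail addresses and phone numbers, and send_to_chatbot is a placeholder for whatever third-party API you actually call; real GDPR-grade protection requires far more thorough measures.

```python
import re

# Simplistic redaction patterns; real GDPR-grade redaction needs much more.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def scrub(text):
    """Replace recognizable personal data before it leaves your systems."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def ask_chatbot(prompt, send_to_chatbot):
    """Send only the scrubbed prompt; `send_to_chatbot` is a placeholder
    for the third-party chatbot API call you actually use."""
    return send_to_chatbot(scrub(prompt))

print(scrub("Contact Jane at jane.doe@example.com or +36 30 123 4567."))
```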

As in every case, you must also pay attention to cybersecurity issues. Because of its human-like nature and the realistic conversations AI is capable of, there is a risk of it being used maliciously. Cybersecurity issues can also arise from an AI system being purposefully trained in a malicious way.

The risk of disinformation raises many additional questions: as already experienced with commercial systems, it is possible that your chatbot will give ‘confident’ answers that are incorrect. How do you make sure that your in-house chatbot’s answers are up to date? What happens if it turns out that they are not? How can you maintain accuracy, especially if your AI system is not connected to the internet and only processes data available up to a certain cutoff date? If your customers are using your chatbot, do you have a responsibility to point out that the chatbot’s answers need confirmation before being incorporated into their work, products, or decisions?

Another area of open questions is intellectual property rights. How do you prevent your AI system from training on third-party data or material you are not authorized to use, so that it does not infringe others’ intellectual property rights? And the other side of the coin: if your in-house AI-based chatbot creates something, would you own the IP rights? Would the chatbot? Or no one, because only a human can have original ideas and create original work, which is the condition for owning intellectual property rights? Who will own the work created by your AI system?

From the perspective of the employer-employee relationship: what happens if work that is supposed to be done by an employee is done by an AI system instead?

If you are using your in-house AI system to give advice (for example, legal advice or drafting agreements), who will be responsible for the claims arising from it? Will you be obligated to indemnify a customer who relied on your chatbot’s advice?

And last but not least, some ethical considerations: whose responsibility is it to create an ethical AI? With so many parties involved in the process, is there anyone to take the blame if your AI system is unethical in its interactions? Is it the developer? The company that hires the developer? And if you don’t want to develop your own chatbot and instead hire a company to do it for you, is it you, the customer? Or should users be responsible for using only ethical AI? Should it be regulated at a governmental level?

Closing thoughts

In this article, we gave you an overview of the legal implications you will face when developing, deploying, or using your in-house AI system. We hope we could help you or give you an insight into this small but very interesting segment of law.


Not all AI-driven chatbots are created equal. At Pronovix, we had been considering privacy and data storage issues well before the AI Act, and we continue to develop solutions to address them. Together with our hosting partner Amazee, we are working on a proof of concept of a fully private Retrieval-Augmented Generation (RAG) chatbot that sources answers only from the content of a developer portal. This solution is expected to side-step or obviate many of the issues raised in this article. Please get in touch with us if you are interested.
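For readers unfamiliar with the pattern, the sketch below shows the general shape of a RAG loop: retrieve the most relevant documents for a question, then let a model answer only from that retrieved context. This is a generic, hypothetical illustration (naive keyword-overlap retrieval, a local_llm placeholder for a privately hosted model), not Pronovix’s actual implementation.

```python
# A toy Retrieval-Augmented Generation (RAG) loop. This is a generic
# illustration of the pattern, not Pronovix's actual implementation.

def retrieve(question, documents, top_k=2):
    """Rank portal documents by naive word overlap with the question.
    Real systems use vector embeddings instead."""
    question_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(question_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question, documents, local_llm):
    """Generate an answer grounded only in the retrieved portal content.
    `local_llm` is a placeholder for a privately hosted language model."""
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        f"contain the answer, say so.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    return local_llm(prompt)

# Example with developer-portal snippets as the only knowledge source.
portal_docs = [
    "To authenticate, request an API key on the developer portal.",
    "Rate limits: 100 requests per minute per API key.",
]
print(answer("How do I authenticate?", portal_docs, local_llm=lambda p: p))
```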

All Pronovix publications are the fruit of a team effort, enabled by the research and collective knowledge of the entire Pronovix team. Our ideas and experiences are greatly shaped by our clients and the communities we participate in.

Zsófi is an Assistant Counsel at Pronovix.

Ákos started his career as a Drupal developer, and with his experience in Linux server management, he went on to lead the infrastructure and architecture planning of web portal projects as a technical project manager. Having acquired degrees in common law, English legal translation, and EU data protection (GDPR) consultancy, he has been piloting Pronovix’s Legal Team and contracting. As Chief Information Security Officer, he leads Pronovix’s information security and data privacy efforts.
