Simon Coulthard November 14, 2022
As AI becomes more prevalent in our daily lives, businesses around the globe must navigate the growing landscape of legal and regulatory duties that come with using AI systems.
In November 2022, the Information Commissioner's Office (ICO) published guidance on how organizations can use AI and personal data both ethically and lawfully, in compliance with the UK's data protection framework.
The guidance is complemented by a set of frequently asked questions about the use of AI and personal data, such as whether impact assessments must be carried out, whether outputs must adhere to the accuracy principle, and whether organizations need authorization to process personal data.
This guidance explains how data protection rules apply to AI initiatives while keeping an eye on the many advantages that such projects might provide, helping organizations reduce the risks that arise specifically from a data protection standpoint.
Read more about this guidance on the ICO website.
The ICO's guidance acknowledges that while utilizing AI has undeniable advantages, it can also endanger people's freedoms and rights when data protection is not taken seriously. To this end, their guidance provides a useful framework for how enterprises should evaluate and mitigate these risks.
The guide covers eight strategic elements that businesses can adopt to improve how they handle AI and personal data:
Before using AI, you should determine whether it is actually necessary for the situation. AI is typically considered a high-risk technology when it interacts with personal information: systems need large amounts of data to work properly, and that data may be shared or resold, leaving individuals unaware of who receives it or how it is used.
There may therefore be a more efficient and privacy-preserving alternative.
As the ICO states, you must evaluate the risks and put in place the necessary organizational and technical safeguards to reduce them. Realistically, it is impossible to completely eliminate all risks, and data protection laws do not mandate that you do so, but make sure you:
When a data protection impact assessment (DPIA) is legally required, you must conduct one before deploying an AI system, and introduce proper organizational and technical safeguards to reduce or manage the risks you identify. If you find a risk that you cannot adequately mitigate, you are legally required to consult the ICO before any processing takes place.
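To make that gate concrete, here is a minimal Python sketch of how a team might encode common high-risk triggers into a pre-deployment check. The trigger list is a simplified assumption drawn from typical GDPR Article 35 criteria (large-scale processing, special category data, automated decisions with significant effects, systematic monitoring), not the ICO's definitive test, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProcessingProfile:
    """Simplified description of a planned AI processing activity."""
    uses_personal_data: bool
    large_scale: bool                      # data about many individuals
    special_category_data: bool            # health, biometrics, etc.
    automated_decisions_with_effect: bool  # legal or similarly significant impact
    systematic_monitoring: bool            # e.g. tracking behaviour over time

def dpia_likely_required(p: ProcessingProfile) -> bool:
    """Conservative heuristic: any high-risk trigger means
    'do a DPIA (and involve your DPO) before processing starts'."""
    if not p.uses_personal_data:
        return False
    return any((
        p.large_scale,
        p.special_category_data,
        p.automated_decisions_with_effect,
        p.systematic_monitoring,
    ))

profile = ProcessingProfile(
    uses_personal_data=True,
    large_scale=True,
    special_category_data=False,
    automated_decisions_with_effect=True,
    systematic_monitoring=False,
)
assert dpia_likely_required(profile)  # block deployment until the DPIA is done
```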
According to the ICO, it can be challenging to explain how AI arrives at particular decisions and results, especially with machine learning and complex algorithms - but that doesn’t mean you shouldn’t provide explanations to people (an illustrative sketch follows below).
Here’s what the ICO recommends:
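As a simple illustration - not taken from the ICO's list - here is one way a linear scoring model could return a human-readable explanation alongside its result. The feature names and weights are invented for this sketch; complex models need dedicated explainability tooling.

```python
# Hypothetical weights for a simple linear scoring model.
WEIGHTS = {"late_payments": -2.0, "tenure_months": 0.05, "income_band": 1.0}

def score_with_explanation(features: dict) -> tuple[float, list[str]]:
    """Return the score plus the factors ranked by how much they moved it."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True)
    reasons = [f"{k} contributed {contributions[k]:+.2f}" for k in ranked]
    return total, reasons

score, reasons = score_with_explanation(
    {"late_payments": 2, "tenure_months": 36, "income_band": 3}
)
print(score)    # ≈ 0.8
print(reasons)  # late_payments first: it moved the score the most
```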
The ICO advises limiting data collection whenever possible. This is not to say that data cannot be collected - it only means that data must be managed in a way that meets GDPR standards (a minimal code sketch follows below).
You should:
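As one concrete illustration of data minimization in practice (field names are hypothetical), the sketch below whitelists only the fields the model needs and pseudonymizes the record identifier with a salted hash - pseudonymization, not anonymization, so the output is still personal data.

```python
import hashlib

# Whitelist the fields the model actually needs; everything else is dropped.
FIELDS_NEEDED_FOR_MODEL = ("age_band", "tenure_months", "plan_type")

def minimise(record: dict, salt: bytes) -> dict:
    """Return a training row with only whitelisted fields, keyed by a
    salted hash instead of the raw customer ID (pseudonymization)."""
    pseudo_id = hashlib.sha256(salt + record["customer_id"].encode()).hexdigest()
    return {"id": pseudo_id, **{k: record[k] for k in FIELDS_NEEDED_FOR_MODEL}}

raw = {
    "customer_id": "C-1042",
    "email": "jane@example.com",  # not needed for the model: dropped
    "age_band": "35-44",
    "tenure_months": 27,
    "plan_type": "pro",
}
print(minimise(raw, salt=b"store-and-rotate-this-secret"))
```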
The accuracy principle for data protection does not require an AI system to be 100% correct. Instead, organizations should ensure that procedures are in place to guarantee fairness and overall accuracy of results.
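One way to operationalize this is to measure error rates against labelled data and act when they fall below a documented threshold. The metric and the 0.90 threshold below are illustrative assumptions, not values taken from the ICO guidance.

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

ACCEPTABLE_ACCURACY = 0.90  # set per use case and document it in your DPIA

preds = [1, 0, 1, 1, 0, 1, 0, 0]
truth = [1, 0, 1, 0, 0, 1, 0, 1]

score = accuracy(preds, truth)
if score < ACCEPTABLE_ACCURACY:
    # In production this should alert a human or pause automated use,
    # not just print.
    print(f"accuracy {score:.2f} below threshold; review the system")
```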
There are several ways an AI system can become biased or discriminatory; inaccurate or imbalanced training datasets are a common cause, and addressing the issue early is an important aspect of data privacy compliance (a simple illustrative check follows below).
The ICO recommends that you:
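As an illustration of one such check (not a reproduction of the ICO's recommendations), the sketch below compares positive-outcome rates across groups - a demographic-parity test. The 0.8 ratio is the informal "four-fifths rule" convention, and the group labels are hypothetical.

```python
from collections import defaultdict

def selection_rates(outcomes: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-outcome rate per group for a binary decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(outcomes, groups)
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:  # informal four-fifths rule
    print(f"possible disparate impact, selection rates: {rates}")
```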
As already mentioned, AI is only as reliable as the data it is given. Organizations therefore need to make sure that enough time and resources are devoted to gathering the necessary data (a small data-audit sketch follows below).
The ICO recommends that:
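By way of illustration (the checks below are common-sense examples, not the ICO's list), a small pre-training audit can catch missing values, duplicate records, and label imbalance before any model sees the data.

```python
from collections import Counter

def audit(rows: list[dict], label_key: str) -> list[str]:
    """Return human-readable data-quality issues found in tabular rows."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        missing = [k for k, v in row.items() if v is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")
        key = tuple(sorted(row.items()))
        if key in seen:
            issues.append(f"row {i}: duplicate record")
        seen.add(key)
    counts = Counter(r[label_key] for r in rows if r[label_key] is not None)
    if counts and max(counts.values()) > 0.9 * sum(counts.values()):
        issues.append(f"label imbalance: {dict(counts)}")
    return issues

rows = [
    {"age": 41, "label": 1},
    {"age": None, "label": 1},
    {"age": 41, "label": 1},  # duplicate of row 0
]
print(audit(rows, label_key="label"))
```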
AI systems have the potential to increase risks or introduce new security vulnerabilities.
When it comes to security measures, there is no one-size-fits-all approach. However, you must abide by the law and put in place proper organizational and technical safeguards that provide a level of security proportional to the risks identified (one illustrative control is sketched below).
The ICO recommends that businesses:
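One illustrative technical safeguard - among many, and not specifically mandated by the ICO - is encrypting a training extract at rest. The sketch uses the third-party cryptography package (pip install cryptography); real deployments also need key management, access control, and monitoring, which are out of scope here.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this in a secrets manager, never in code
fernet = Fernet(key)

training_extract = b'{"customer_id": "C-1042", "age_band": "35-44"}'
token = fernet.encrypt(training_extract)  # ciphertext is safe to write to disk

assert fernet.decrypt(token) == training_extract
```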
Depending on the goal of the AI, you should determine early on whether its outputs will be used to assist a human decision-maker or whether decisions will be made fully automatically.
The ICO emphasizes that data subjects have the right to know whether decisions involving their data were made solely by automated means or with human involvement. The guidance also says that AI outputs should be meaningfully reviewed when they are used to assist a person.
To make sure that these reviews are meaningful, the reviewers should be:
Under the GDPR, when a decision produces a legal or similarly significant effect, data subjects have the right not to be subject to it if it was made solely by automated means. They are also entitled to meaningful information about the reasoning behind the decision.
As a result, even though it is framed as a recommendation, human review is effectively necessary when AI is making important decisions.
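A minimal sketch of that routing, with hypothetical names throughout: model outputs with legal or similarly significant effects are never released automatically but are queued for a human reviewer, and the reviewer's identity and final outcome are recorded so the data subject can be given meaningful information later.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    model_output: str         # e.g. "decline"
    significant_effect: bool  # legal or similarly significant impact?
    final: str | None = None
    reviewed_by: str | None = None

review_queue: list[Decision] = []

def decide(d: Decision) -> Decision:
    if d.significant_effect:
        review_queue.append(d)  # a human must confirm or override
    else:
        d.final = d.model_output  # low-impact: automated result stands
    return d

def human_review(d: Decision, reviewer: str, outcome: str) -> Decision:
    d.reviewed_by, d.final = reviewer, outcome  # audit trail for the subject
    return d

decide(Decision("S-77", model_output="decline", significant_effect=True))
human_review(review_queue.pop(), reviewer="analyst-12", outcome="approve")
```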
Purchasing an AI system from a third party does not absolve you of responsibility for adhering to data protection legislation. In most cases, you will be the data controller, since you decide how the AI system is deployed.
As a result, you must be able to demonstrate how the AI system adheres to data protection legislation.
The ICO suggests that businesses:
Although AI has the potential to be a valuable tool, as it develops it also poses risks to data security and privacy, along with regulatory concerns.
It's difficult to avoid bringing up the General Data Protection Regulation (GDPR) while discussing artificial intelligence (AI) rules. Data is the essential component for AI applications, and the GDPR has had the greatest worldwide influence in terms of creating a more regulated data market.
Check out our GDPR and data privacy hub, which goes in-depth into regulations and compliance.