Artificial intelligence (AI) presents transformative opportunities for businesses, but it also introduces significant risks, particularly concerning GDPR compliance. In this second article in his AI series, Damian Davies from The Timebank explores the five biggest GDPR risks associated with using AI tools, offering practical guidance on how to navigate these challenges while safeguarding sensitive client data.
When you are looking at AI for your business, you need to be scared.
Really scared.
I don’t mean scared that it will learn what you do, take your clients, take your job, create a universal connected army of artificial intelligence controlling all electrical machines around the globe, mount an insurrection and start farming humanity as an energy source.
I mean, you need to be scared about how you expose your clients’ data to it.
In my last article, I introduced the idea that AI has evolved into a tool to handle and manage large amounts of data and language.
It’s not surprising that it has evolved this way, as the companies with the money to invest in AI are the companies that stand to benefit most financially from data and language!
The problem is that these are two of the most sensitive parts of a business, particularly a business so heavily regulated for data protection as financial services.
As such, if you want to use AI, don’t dive into the first tool that is presented to you. You need to consider the risks.
Risk 1 – Know where the data is going when you use an AI tool
There is no UK law specifically covering the regulation of AI. However, UK data protection law includes provisions for ‘automated decision-making’ and the broader processing of personal data, and these extend to the development and training of AI technologies.
This is where the danger lies, especially when you read what the ICO says in its guidance.
- Firstly, AI agents generally can’t be loaded into your data ecosystem. Instead, you will have to load client data into whatever AI agent or tool you use.
- At the same time, most instances we have seen of firms using AI in financial services are to annotate notes or help formulate writing. A lot of these tools state in their T&Cs that they will retain the information you load in order to better train the model.
These two factors combine to create a considerable risk.
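If client material must leave your systems at all, one practical mitigation is to strip or pseudonymise obvious identifiers before anything is sent to the tool. The sketch below is purely illustrative, assuming a hypothetical `redact` helper built on a few regular expressions; real-world redaction needs far broader coverage, and it does not remove the need for the assessments described in the risks that follow.

```python
import re

# Illustrative patterns only: a production redactor would need much
# broader coverage (names, addresses, policy and account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before any text
    leaves your own systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Spoke to the client on 07700 900123, email jo.bloggs@example.com."
print(redact(note))
# Spoke to the client on [UK_PHONE], email [EMAIL].
```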
Risk 2 – AI processing will probably mean you need to undertake a “Data Protection Impact Assessment”
Loading confidential client data into an AI tool means the use of AI will involve a type of processing likely to result in a high risk to individuals’ rights and freedoms. This will, therefore, trigger the legal requirement for you to undertake a “Data Protection Impact Assessment” (DPIA) on a case-by-case basis.
In those cases where you assess that a particular use of AI does not involve high-risk processing, you still need to document how you have made this assessment.
Risk 3 – If you don’t consult the ICO and affected clients about how you will use AI, you could breach GDPR
If the result of an assessment indicates a residual high risk to individuals that you cannot sufficiently reduce, you must consult with the ICO prior to starting the processing.
Unless there is a good reason not to do so, you should seek and document the views of individuals whose personal data you process, or their representatives, on the intended processing operation during a DPIA.
It is, therefore, important that you can describe the processing in a way that those you consult with can understand. However, demonstrating that consultation would compromise commercial confidentiality, undermine security, or be disproportionate or impracticable can be a legitimate reason not to consult.
Risk 4 – You need to document the AI process in order not to breach your GDPR data controller responsibilities
The profession we operate in means we have to consider the GDPR implications much more deeply. Adopting AI isn’t as simple as deciding to use a programme like a cashflow forecasting tool; it represents a very considerable risk within your GDPR policy management.
You should document these processes and their outcomes to an auditable standard. This will help you to demonstrate that your processing is fair, necessary, proportionate, adequate, relevant and limited.
This is part of your responsibility as a controller under Article 24 and your compliance with the accountability principle under Article 5(2). You must also capture these processes and outcomes in appropriate detail where required as part of a DPIA, or of a legitimate interests assessment (LIA) undertaken in connection with a decision to rely on the “legitimate interests” lawful basis for processing personal data.
You should also document:
- how you have considered the risks to the individuals whose personal data is being processed,
- the methodology for identifying and assessing the trade-offs in scope, and the reasons for adopting or rejecting particular technical approaches (if relevant),
- the prioritisation criteria and rationale for your final decision, and
- how the final decision fits within your overall risk appetite.
You should also be ready to halt the deployment of any AI systems if it is not possible to achieve a balance that ensures compliance with data protection requirements.
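None of this needs heavyweight tooling to get started. As a minimal sketch of how such records could be kept auditable (the structure and field names below are my own illustration, not an ICO template), each decision might be written to an append-only log:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIProcessingDecision:
    """One auditable record per AI use case. Field names are
    illustrative, not an official ICO or UK GDPR template."""
    tool: str
    purpose: str
    risks_to_individuals: list[str]
    trade_offs_considered: list[str]
    rationale: str
    within_risk_appetite: bool
    decided_by: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision = AIProcessingDecision(
    tool="ExampleNoteTaker (hypothetical)",
    purpose="Summarising client meeting notes",
    risks_to_individuals=["Vendor retains inputs to train its model"],
    trade_offs_considered=["On-premises alternative rejected on cost"],
    rationale="Contract amended to exclude training on our data",
    within_risk_appetite=True,
    decided_by="Data Protection Officer",
)

# An append-only log provides the auditable trail described above.
with open("ai_decision_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(decision)) + "\n")
```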
We have been looking into some of the off-the-shelf AI solutions that firms are taking up, and it is a struggle to see how some can pass a due diligence assessment.
Risk 5 – Assessment of the AI tools you use is an ongoing responsibility, not just at the point of purchase
When you buy an AI solution from a third party, you need to conduct an independent evaluation of any trade-offs as part of your due diligence process. You are also required to specify your requirements at the procurement stage rather than addressing trade-offs afterwards.
Recital 78 of the UK GDPR says producers of AI solutions should be encouraged to:
- take into account the right to data protection when developing and designing their systems, and
- make sure that controllers and processors are able to fulfil their data protection obligations.
You should ensure that any system you procure aligns with what you consider to be the appropriate trade-offs.
If you are unable to assess whether the use of a third-party solution would be data protection compliant, then you should, as a matter of good practice, opt for a different solution.
Since new risks and compliance considerations may arise during the course of the deployment, you should regularly review any outsourced services and be able to modify them or switch to another provider if their use is no longer compliant in your circumstances.
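In practice, this means diarising those reviews rather than trusting memory. A toy illustration, with invented vendor names and an assumed six-monthly review cycle:

```python
from datetime import date, timedelta

# Hypothetical register; in reality this would sit in your compliance
# system rather than a standalone script.
REVIEW_INTERVAL = timedelta(days=182)  # roughly six months, assumed

ai_vendors = [
    {"name": "ExampleNoteTaker", "last_reviewed": date(2024, 1, 10)},
    {"name": "ExampleSummariser", "last_reviewed": date(2024, 6, 3)},
]

for vendor in ai_vendors:
    due = vendor["last_reviewed"] + REVIEW_INTERVAL
    if date.today() >= due:
        print(f"{vendor['name']}: compliance review overdue (was due {due})")
```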
The GDPR risk is currently the biggest obstacle, but careful consideration and not rushing in can help prevent a risk scenario from blowing up.
At The Timebank, we have been exploring AI tools for a couple of years now.
In fact, we have set up a lab called The Engine Room, where we are testing various tools on live cases. If you are intrigued, you are welcome to log an interest here.
Please feel welcome to contact us through there, and we can provide you with a DPIA blueprint.
In the next article, I will unpack some of the tools you might want to start researching, albeit with a now-educated eye on the GDPR risks to your business.