The rise of AI IFAs – What are the risks? 

Written by Tom Llewellyn, Partner in the Commercial Disputes Team at Ashfords

AI is going to affect every aspect of every business, including financial advice. Of that there can be no doubt. Whilst the US and the EU plan to regulate AI, the UK currently does not, taking the view instead that existing legislation can be applied to this nascent technology.

So what does this mean for financial advice firms? The Information Commissioner's Office (ICO) and the FCA have given a lot of guidance on what they expect advice firms to consider when developing and using AI. Whilst this guidance is relatively high level, AI being an emerging technology, expect more detailed guidance to follow even if the underlying legislation does not change.

The ICO's starting position is that data protection and privacy should be built in from the start when developing new products and services; they should not be an afterthought to the technology itself. Similarly, the FCA considers that we are at a pivotal juncture in how AI is approached, that beneficial innovations will only materialise via regulation, and that there is a responsibility to ensure the safe and responsible adoption of AI in financial services. This can only be done at the outset.

So what are the risks and opportunities? 

Adoption of AI is at too early a stage for any comprehensive data on its impact on financial advice, particularly with regard to increasing or reducing the risk of litigation, but the risks and opportunities are clear. It wouldn't be hyperbole to say that AI has the potential to transform the way we manage finances and financial advice, and to aid compliance with the new Consumer Duty. AI could be used to review risk questionnaires and investment portfolios, and then make recommendations on investment strategies and portfolios based on that information, reviewing all available products and funds to find the most suitable investment strategy. An AI tool could review far more data, and therefore undertake a far more thorough analysis, than any one person could.

Similarly, AI can monitor the performance of investment portfolios on an ongoing basis, assisting with the provision of ongoing advice. AI can also help with the make-up of funds and the ongoing management of portfolios themselves. Just as there are active and passive funds, we could see a rise in AI-managed funds.

Taken together, this could lead to better consumer protection, better advice, increased competition and therefore lower prices for consumers, and reduced financial crime.

However, there are significant risks. First, AI needs data to learn from. Data must be input as a base from which an AI tool can learn, and further data will be input and learnt from on an ongoing basis. That data will be the personal data of individuals, giving rise to an inherent risk that a significant amount of sensitive financial data could be compromised should a cyber-attack or data breach occur. Both the ICO and the FCA require firms to ensure that adequate security is in place to protect personal data, as well as to have suitable disaster recovery plans and protocols.

One answer to this risk, aside from a high level of cyber security, might be to anonymise all data input into an AI tool, with identifying data only being added back in manually once a report has been created. Whatever approach is taken, data protection principles need to be fundamental to any AI tool being developed. This can be built in from the outset with a proprietary AI tool, but there is an inherent risk with open-source tools or ChatGPT.
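As an illustration only, the sketch below shows one way that anonymise-then-reidentify workflow could look in practice. Strictly, because the identifying details are kept and reattached later, this is pseudonymisation rather than full anonymisation under the UK GDPR. It is a minimal Python example, not a recommended implementation: the field names, the lookup mechanism and the shape of the AI tool's report are all assumptions, and any real system would need to satisfy the firm's own data protection impact assessment and the ICO's anonymisation guidance.

```python
# Minimal sketch of pseudonymising a client record before it is sent
# to an external AI tool, then restoring identity afterwards.
# All field names and the report structure are hypothetical.
import uuid

IDENTIFYING_FIELDS = {"name", "address", "email", "ni_number"}

def pseudonymise(record: dict) -> tuple[dict, dict]:
    """Strip identifying fields, returning the cleaned record plus a
    locally held lookup so identity can be restored after analysis."""
    token = str(uuid.uuid4())
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["client_ref"] = token
    # The token-to-identity mapping never leaves the firm's own systems.
    lookup = {token: {k: record[k] for k in IDENTIFYING_FIELDS if k in record}}
    return cleaned, lookup

def reidentify(report: dict, lookup: dict) -> dict:
    """Reattach identifying details once the AI tool's report is back."""
    return {**report, **lookup[report["client_ref"]]}

client = {"name": "A Client", "email": "a@example.com",
          "risk_score": 4, "portfolio_value": 250_000}
safe_record, lookup = pseudonymise(client)
# safe_record (no name or email) is what would be sent to the AI tool;
# the report it returns carries only the client_ref token.
report = {"client_ref": safe_record["client_ref"],
          "recommendation": "rebalance towards lower-risk funds"}
print(reidentify(report, lookup))
```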

A further consideration is that training and using AI will require the consent of data subjects to their personal data being used by an AI tool. Whilst not related to financial advice, Clearview AI recently escaped significant financial punishment for using personal data to train an AI facial recognition tool. It was only because the company fell outside the jurisdiction of the UK GDPR that a fine was avoided, and only then on appeal, having initially been fined £7.5m. Clearview's use of personal data would otherwise have contravened the UK GDPR. Equally, just because you have consent to use personal data to provide financial advice does not necessarily mean that there is consent to use it in, or to train, an AI tool.

Furthermore, as we have already seen with the use of AI in other professions, AI can give phantom (or "hallucinated") answers, or be subject to biases introduced during the coding process. It could also simply make a mistake. Whilst AI can improve the advice given, there still need to be checks to ensure the accuracy and suitability of that advice, and to protect against the risk of discrimination.

Further problems can arise where the AI tool used is third-party software. Who is responsible for negligent advice – the software provider or the advice firm? If multiple advice firms use the same AI tool, there is also a risk that it duplicates material between firms or uses one firm's confidential information for the benefit of a competitor.

Highlighting the risks that an AI tool can present should not be taken as a warning not to develop and use AI tools; on the contrary, they can change all industries for the better. The key message is to ensure that the risks are considered and accounted for at the outset, to minimise the risk to both the firm and the consumer.

For particularly complex data protection concerns, the ICO will consider sandbox applications to help engineer privacy into the design of new products and services.
