Finance sector in danger of over relying on AI as 91% businesses not ready for risks

77% of global businesses are pursuing the use of artificial intelligence. Despite global adoption, a staggering 91% of organisations admit they are not ready to manage risks posed by AI.

Application security SaaS company Indusface, gathered insights into UK businesses to determine the sectors most at risk of AI dependence.

Adding to the findings, Venky Sundar, Founder and President of Indusface, shares his insight around the dangers of relying too much on AI in the business world.

Which UK industries are using AI the most?

 
 
RankIndustryBusinesses currently using AI (%)Businesses planning to use AI (%)
1Legal30%31%
2IT and Telecoms29.5%27%
3Finance and Accounting26%20%
4Media, Advertising and Sales20%21%
5Transport and Distribution17%16%
6Manufacturing16.5%14.5%
7Education15.5%15.5%
8Construction12%15%
9Real estate12%14.5%
10Hospitality and Leisure11%9%

*Figures according to Forbes Advisor 2024 AI Report

These industries are reaping significant benefits from AI, but as reliance grows, so do the risks.

The legal sector is one of the most prominent industries using AI, with the industry revolutionising legal research, analysis, and due diligence, but is the sector moving too fast?

According to the survey, almost a third (30%) of law businesses across the UK state they have already adopted AI, enhancing the quality of their legal services. 

 
 

IT and telecoms rank in second with 29.5% of UK businesses in this sector having adopted the use of AI.

Benefits for this sector include network optimization, predictive maintenance, customer service automation, security, and data analytics, among other applications. 

AI also plays a pivotal role in the finance and accounting sector, with more than a quarter (26%) of UK businesses in this field already utilising AI for tasks in the workplace. 

The finance sector uses artificial intelligence for fraud detection, credit risk assessment, financial reporting, tax preparation and customer service. Could AI’s role in financial services introduce new vulnerabilities?

 
 

It’s clear to see that using AI in business offers numerous benefits, however, these sectors could be at risk of AI dependence, with almost six in ten (59%) Brits admitting they are concerned about the use of artificial intelligence, according to Forbes Advisor research.

Venky Sundar, Founder and President of Indusface shares his insight on the risks of using AI in business:

Any sectors powered by tech as a key element of their business, from farming to finance are at risk of relying too much on AI

“Anything data heavy, requiring analytics to stay relevant (consumer engagement and insights in ecomm and retail, credit and risk rating, and consumer understanding will see faster adoption of AI than other sectors)

“The risk though is that POC (proof of concept) should just be used for that purpose. If you go to market with the POC, there could be serious consequences around application security and data privacy. AI could give you code that is vulnerable to attacks, as secure coding practices are rarely followed, and you can bet that most code that AI has been trained on would be vulnerable to attacks.

“The other risk is with just using LLMs (large language models) as an input interface for the products. So far, the inputs that you could provide the software with were very limited, as form fields such as text boxes or drop down lists controlled most inputs into the software. But using AI to read and interpret a prompt could actually remove this predictability. This can lead to serious consequences and early risks of these “prompt injections” are being captured by OWASP (the Open Worldwide Application Security Project).

How can businesses overcome this risk? 

“Whilst AI can help businesses conceptualise and build code, AI should still be used as a sidekick, not a dependent, especially for software development.

“Before companies go to market, we would still recommend following the same best practices of 1) vulnerability scanning 2) penetration testing 3) patching and 4) WAAP implementations to thwart attacks.”

“Since LLM injections are a threat, the knowledge base used to build productivity use cases, and the knowledge base used to build defence use cases on what’s not acceptable have to be separate sources that need to be trained and updated continuously.

“There needs to be oversight from human beings, and the people maintaining these datasets have to bediverse so that the data sets are diverse enough.”

Related Articles

Sign up to the IFA Newsletter

Please enable JavaScript in your browser to complete this form.
Name

Trending Articles


IFA Talk logo

IFA Talk is our flagship podcast, that fits perfectly into your busy life, bringing the latest insight, analysis, news and interviews to you, wherever you are.

IFA Talk Podcast – listen to the latest episode