Round-table: Is AI really ready for the market and have the threats been mitigated?

This week, IT Europa attended a round-table on artificial intelligence in business at the Gherkin in the City of London, to hear the thoughts of executives at leading companies on the subject, not least Sarah Armstrong-Smith, chief security advisor at Microsoft.

While some technology companies think generative AI - and the large language models (LLMs) that are trained to deliver the business insights everyone is after - should be covered by new legislation to alleviate concerns around privacy, copyright, data security and potential job losses, among other issues, others disagree.

For one thing, many of the potential negatives arising from AI are probably covered by existing legislation, particularly when it comes to privacy and copyright. Companies opposed to new legislation also argue that it goes against how fledgling technology is normally handled while it is still being developed.

At the OpenText customer and partner event in London earlier this month, Mark Barrenechea, CEO and CTO of the $6 billion software services vendor, said he was against fresh legislation on LLMs. “You have to let the horses run in an unregulated market that governments don’t understand. We are selling AI tools to enterprises that need them now,” he said.

Barrenechea stressed that he understood privacy and copyright dangers had to be addressed, for instance, but maintained that any potential fall-out from AI could be handled by existing laws that companies can still work to.

So what did our round-table think of the current state of play around AI?

Simon McDougall, formerly a deputy commissioner at the UK Information Commissioner’s Office (ICO) and now chief compliance officer at data services firm ZoomInfo, said: “With AI, we are now probably in a similar position to where we were with the early cloud market. There was a rush to the cloud, and then some customers started to ask where their data was actually located.

“They were told not to worry about it, but they did, so the US cloud providers created a European cloud to help address those concerns here.”

He added that AI had created further concerns around data, and that assurances were now needed.

The round-table was hosted by Vanta, a trust management platform that will now have to deal with the security and compliance challenges of AI.

Jadee Hanson, chief information security officer at Vanta, said: “AI has uses for good, and not so good. Criminals, for instance, can use AI to create perfect scam emails, where before these were often poorly worded. They can also create video deepfakes of CEOs, to fool staff and other companies as part of fraud.”

On data provenance, Sarah Armstrong-Smith, chief security advisor at Microsoft, said the company had already signed up to international agreements when it came to making it easier to counter deepfakes, and to be able to check what changes had been made to data being presented.

She said industry discussions were ongoing around these issues, but that Big Tech did need “codes of conduct” around AI. That said, Armstrong-Smith does not believe AI needs to be “slowed down” when it comes to LLM development; some in the industry have previously called for a six-month moratorium on development.

“Brad Smith, our vice chair and president [and previously general counsel of Microsoft], says we need to speed up on AI, not slow down; that won’t work.

“Due diligence and compliance have to be done, yes; LLMs still hallucinate and show bias, for instance. But with generative AI we have expanded the parameters to allow the software to create content itself, and we are working with other companies to make it work.”

But, Armstrong-Smith said, other companies weren’t co-operating to improve matters, “which is what we have to realise and be careful about”.

She added: “Security bodies and privacy advocates can’t be allowed to stifle progress though. And security departments can actually be used to enable progress for new technology like AI, instead of being a block on it, as in the past.”

An AI Act from the European Parliament [https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai] is already in play, and there has been an AI Executive Order from the US president, which might reassure some companies about perceived AI threats.

Vanta’s Hanson said: “In our research, over half of companies tell us they would adopt AI if it were regulated, but it’s not something that can be regulated as easily as other areas.”

Armstrong-Smith said: “The UK Financial Conduct Authority says they are going to have to regulate AI in financial services, but they admit they don’t totally understand it, so this is where the technology industry comes in to have the discussion with them.”

Hanson added: “As they say, the truth will set you free, and with AI, it’s transparency that will set you free.

“Customer data is needed to train AI models, but customers should be able to opt out. And they should be able to know what’s going on.”

McDougall chipped in: “But how do customers get the personalised service they want? Too much regulation could put sand in the wheels – it’s about getting somewhere in the middle.”