Elizabeth Blosfield (Host):
I know there’s been a lot of talk recently about the US government’s AI Action Plan. Could you break down some of the key elements you see there and how it differs from what’s happening in the EU?
Claire Davey:
They’re two different concepts. The AI Action Plan is more of a pitch and a forward-looking goal, whereas in the EU, what we have with the AI Act is more of a regulatory framework. They’re trying to achieve slightly different things.
From a US perspective, Trump’s AI Action Plan is really about positioning the USA as a global leader in AI — not just in AI innovation, but also in building out its infrastructure. He also makes a point in that document around diplomacy and security. They really want to be leading the globe in all things AI and hope that has a positive impact on their economic growth.
A closer analogy to that would be the UK’s AI Action Plan, released in January 2025. It had a similar concept: boosting economic growth and the UK’s global standing through initiatives like AI growth zones, with a particular focus on infrastructure and on data that could be sold to strengthen the UK’s finances. Those two plans are quite similar.
EB: That makes a lot of sense. I know that, at least in the US, much of the focus so far has been at the state level, and some states are already pushing their own AI regulations. Do you see those state-level rules as helping, or as creating more challenges for startups and innovation in the space?
CD: I understand why those states are creating their own regulatory frameworks in the absence of federal direction.
From a startup’s perspective, regulation is always more difficult to navigate because you don’t have the internal or financial resources to dedicate huge amounts of time to becoming an expert on a particular regulation and ensuring you remain compliant with it.
It’s additionally complicated for startups in the US if they have to navigate multiple states’ regulatory frameworks. Interestingly, linking back to your earlier point about the US AI Action Plan, one of the comments in there was that they would actually try to move investment away from states that have already tried to impose regulation. So they’re trying to incentivize deregulation.
From a startup’s perspective, yes, it is daunting. And there’s also this uncertainty of, well, if I try and be compliant to this regulation this year, are there going to be others coming along that are going to impact my business model?
I would challenge that somewhat. Removing any form of AI regulation in the United States is helpful if you only want your business or technology to be used and deployed in the US. But regulation is not going away across the globe, and countries everywhere are going to come to the table with a regulatory framework.
So if a startup or scale-up wants to be able to sell its technology to international users, it’s going to have to find a way to build a governance framework and compliance into that technology.
EB: That makes a lot of sense, and it’s a good outline of the challenges with AI regulation right now. As you mentioned, increased regulation can create a heavier burden for startups, and in the US it’s especially difficult with so many state-level rules. Do you think emerging AI regulations will create a heavier compliance burden for early-stage startups, or could a more uniform approach help level the playing field with larger tech players?
CD: No, is my answer to that question. Ultimately, the larger players have far more resources to dedicate to this. Even if they’re subject to the same regulations, they’re going to come out ahead of the game compared to a startup.
An additional challenge is that if there is uncertainty around the regulatory environment while a startup is trying to raise external investment for its technology or business, investors may be less willing to commit funds, because they are concerned the return on investment will suffer if regulatory action follows.
So that’s a challenge unique to startups. The more established businesses — not to name names — have a pool of cash and don’t need to worry about those sorts of considerations.
EB: That makes a lot of sense. Speaking of established players, we’ve focused a lot on startups so far, but what about larger, more established insurance carriers? How is AI regulation likely to affect them? Will they need to make operational adjustments, or could it fundamentally cause them to rethink their risk models?
CD: Insurers are already adopting AI in many contexts, whether that’s analyzing large datasets or running chatbots on their websites. We also see applications being used to guide customer decision-making, particularly around health risks.
What we’re going to see with regulation is a bigger focus on issues like bias and discrimination, as well as privacy and transparency. In insurance, the risks covered — especially at the consumer level — are very personal. It’s financial information and health information. That’s going to raise significant concerns for regulators about how insurers are using sensitive data within AI technologies.
That’s why we’ve seen movement from bodies like the Information Commissioner’s Office in the UK, and also the Financial Conduct Authority, which has offered a sandbox environment where insurers can safely develop AI technologies. Interestingly, that idea was also adopted into the US AI Action Plan you referenced earlier.
Regulators are aware this is going to become a bigger problem and are working to get their arms around it. But I’d also say it’s not necessarily new for insurers to be using AI.
EB: You’ve spoken a lot about silent AI risks. Can you explain what silent AI is and why industries like insurance should be paying close attention to it?
CD: There are two concepts here. First is silent AI risk within a business, where you can’t see the AI performing its tasks — whether that’s agents operating in a cybersecurity context or data being fed into machine learning systems.
Then there’s silent AI risk as the insurance industry understands it, which is similar to silent cyber around 2017. This means insurers are committing to policies with policyholders who may be using or developing AI technologies. But insurers aren’t necessarily asking the right questions to understand how these technologies are being used, governed, and controlled.
It’s also uncertain in most insurance policies whether that exposure is covered. What we don’t want to see — from either an insurance or regulatory standpoint — are disputes over coverage leading to court cases.
Ideally, innovative startups and tech developers should feel their innovation is resilient and supported, with a safety net if things go wrong from a regulatory perspective. That’s why it’s helpful when insurance policies provide clear terms.
EB: That clarity is important, especially since AI spans such a broad range of areas. We’ve talked about regulatory uniformity, but insurers also need consistency in how they deploy AI tools. Given everything we’ve discussed, what is your advice to established insurers and insurtech startups for staying on top of current AI regulatory trends? What are the most important steps they should be taking right now?
CD: It’s moving fast, and it’s difficult to keep up because every day there’s a new story. There are some great courses and qualifications out there. The AI Governance Professional course from the IAPP is very good — I’d recommend it for staying up to date and for continuing professional development (CPD).
Within a business, the most effective way to handle this is to strategize how AI will be used, ensure it’s implemented correctly and safely, and be ready to react to changes. The best way to do this is through a multidisciplinary committee overseeing these projects — whether that includes a lawyer, an IT security professional, or people from the teams using the AI technology. Having multiple stakeholders involved ensures decisions are well informed.
That’s definitely the best practice we’re seeing at the moment, and my top tip so far.