
Risk Wrap 037: AI Risks in Finance, Biotech Regulation, CEOs’ Cyber Concerns, Crypto in Retail, UK Gambling Rules, and Agentic AI Oversight


From cyber threats to gambling regulation, this edition of Risk Wrap highlights six developments shaping compliance, governance, and insurance exposure across high-risk industries.

Financial Firms Face Growing Liability for AI Systems They Don’t Control

Many AI models operate through opaque, non-linear processes that are difficult to interpret or explain, complicating efforts to monitor performance and conduct forensics.

Most financial institutions power risk assessment, customer service chatbots, and other functions with Large Language Models (LLMs) from external vendors rather than building their own, which intensifies the problem.

Firms may not know how these models were trained, what data was used, how they execute inferential logic, or how updates and retraining could affect output over time. As a result, they may be held responsible for decisions driven by systems they can’t fully audit or govern.

Even though governance can only go so far with black-box AI, regulators still demand it. In response, some firms are pressing AI vendors for greater transparency, including access to model documentation. Others are developing test frameworks to validate third-party models.

Simpler, more transparent models may reduce risk, but they also reduce predictive power. Some firms are willing to make that trade-off, combining foundational LLMs with more interpretable models.

Implications for brokers and their clients:

  • Review the wording of D&O insurance policies to verify whether they explicitly respond to claims alleging inadequate oversight of AI systems.
  • Review tech E&O policies to see if they explicitly address losses arising from AI-driven decisions.
  • Assess whether policies adequately reflect vendor risk, including failures, updates, or limitations in third-party systems that could trigger downstream losses or contractual disputes.

Source: Thomson Reuters (January 13, 2026). Managing AI models’ opacity and risk management challenges.

 

Biotech Firms Brace for Tighter Global Rules

Biotech companies around the world are facing tightening regulations. The FDA's new Quality Management System Regulation harmonizes US Current Good Manufacturing Practice requirements for devices with global standards, aligning them with ISO 13485:2016. The framework takes effect on February 2, 2026.

Changes to manufacturing facilities for Class III PMA devices may soon require new FDA submissions. Operational changes may become more complex and time-consuming, bringing uncertainty to product development timelines.

In China, relocating production can trigger burdensome re-registration and compliance challenges with local authorities. Manufacturers will need to perform regulatory gap analyses, put together transition teams, and be prepared for post-market surveillance.

The FDA also plans to release eight new guidance documents in 2026 covering digital health topics such as AI and patient preference information. China's National Medical Products Administration and National Health Commission have likewise announced updated guidelines for these products, with an emphasis on algorithm transparency and data security.

Implications for brokers and their clients:

  • Investigate product liability insurance tailored to biotech companies that protects against claims arising from device failures, defects, or adverse patient outcomes, including legal defense costs and settlements.
  • Investigate business interruption insurance to mitigate the impact of disruptions caused by regulatory holds or inspections.
  • Investigate D&O insurance for biotech firms that protects against investor claims tied to regulatory action.

Source: JD Supra (January 9, 2026). JPM 2026: AI Compliance and Biosecure Manufacturing Outlook.

Executives Warn AI and Fraud Are Now Outpacing Ransomware Risks

The WEF’s 2026 Global Cybersecurity Outlook surveyed 804 executives across 92 countries on their view of cyber risk, among them 316 CISOs, 105 CEOs, and 123 other C-suite leaders. Here are some highlights:

  • 87% of respondents said AI vulnerabilities have increased in the past year.
  • 77% have seen an increasing risk of fraud and phishing.
  • 72% said they or someone in their network had been personally affected by cyber-enabled fraud during 2025.
  • Only 54% considered ransomware-related risks to be rising (CISOs still considered it the leading risk).

According to WEF Managing Director Jeremy Jurgens, the acceleration of cyber risk in 2026 is “fueled by advances in AI, deepening geopolitical fragmentation and the complexity of supply chains.”

Implications for brokers and their clients:

  • Firms across sectors should investigate cyber liability insurance and crime coverage that explicitly responds to AI-related risks.
  • Business interruption insurance may help offset revenue loss and extra costs incurred when cyber incidents disrupt operations or third-party services.
  • Investigate media liability insurance to cover costs associated with reputational damage in the event of a cyberattack.

Source: Computer Weekly (January 12, 2026). Business leaders see AI risks and fraud outpacing ransomware, says WEF.

 

Irreversible Transactions and Limited Dispute Paths Stall Crypto at Checkout

Despite consumer interest in paying with digital assets at checkout, retailers are hesitant to support the option. Of the minority that do, most are running pilot programs rather than full-scale rollouts.

Unclear liability is at the core of their hesitation. Traditional payment methods assign predictable accountability if something goes wrong. With crypto, that’s not the case. A transaction sent to the wrong address is irreversible, and there’s often no established path for dispute resolution, leaving merchants exposed to losses.

To accept crypto payments, a digital wallet must be embedded directly into the merchant’s checkout process. Even when the wallet infrastructure is provided and operated by a third party, it appears to the customer as part of the merchant’s own payment experience. As a result, customers perceive the wallet as belonging to the retailer, not the actual provider.

If something goes wrong, the customer’s complaint and loss of trust are directed at the company that sells the product. In other words, the reputational and customer service risk sits with the merchant, regardless of who operates the wallet.

Compliance is also a gray area. If a customer’s wallet is later linked to illicit activity, there’s no standard investigation framework for retailers to follow.

Wallet providers that can assume visible responsibility for custody and compliance are better positioned to earn retailer trust.

Implications for brokers and their clients:

  • Tech E&O insurance may help cover claims arising from wallet malfunctions, transaction errors, system outages, or integration failures that cause financial loss to merchants.
  • Digital asset insurance helps crypto firms manage exposure from compliance breaches and sanctions screening failures.
  • Review whether professional liability insurance responds to claims that the wallet provider failed to meet contractual or regulatory obligations. Bespoke coverage from specialized providers may cover claims that traditional policies exclude.

Source: FinTech Weekly (January 7, 2026). The Checkout Paradox: Why Retailers Still Don’t Trust Crypto Payments.

 

UK Gambling Operators Hit With New Restrictive Rules

The UK’s online gambling sector is set to contract as operators react to tighter rules from the Gambling Commission (as well as rising taxes). Some companies have scaled back marketing and paused customer acquisition efforts. Others are planning to withdraw from the UK market entirely in the months ahead.

The changes align with the ongoing emphasis on safer incentives for players. Wagering requirements on bonus funds are now capped at 10 times the bonus amount, so a £10 bonus can require no more than £100 in wagers before withdrawal. This improves transparency for players compared to the more complex structures allowed before. In addition, mixed-product promotions that combine bets across multiple games are now banned.

Implications for brokers and their clients:

  • Partner with insurance providers that have expertise in UK gambling regulation as the landscape continues to change.
  • Review whether tech E&O policies respond to claims of non-compliance with the new rules when violations result from technical faults.
  • With growing emphasis on player safety, secure comprehensive player liability insurance.

Source: Sigma World (January 7, 2026). UK gambling market faces licenced operator exodus: Report.

 

The Security Nightmare of Poor Agentic AI Oversight

A survey by Okta, an access management solution provider, shows that just 10% of the 260 participating organizations have a well-developed identity management strategy for both human and agentic identities. With many breaches involving compromised identity, this is a key risk for any company operating agentic AI.

Agentic AI systems often have broad access across networks and applications. Without intentional strategies in place, agentic activity may evade the conventional controls designed for humans, creating opportunities for privilege escalation and impersonation if an agent is compromised or manipulated. Because agent actions are often hidden behind a human identity, they are almost impossible to audit.

The autonomy of agentic AI increases the likelihood that errors or malicious manipulations will propagate across systems before detection or intervention is possible.

Organizations must also consider memory poisoning. Agentic systems have persistent memory, unlike the stateless operations of traditional applications. As a result, misleading inputs can influence future decisions, causing repeated failures. Research by Anthropic shows that it can take as few as 250 malicious documents to poison an LLM.

Agentic AI environments also increase supply chain risks. The fact that they often rely on external APIs, libraries, and datasets (combined with their opacity) makes it easy for attackers to gain access to an entire system after compromising only one component.

Implications for brokers and their clients:

  • Review whether tech E&O policies specifically address losses caused by autonomous AI agents taking unintended actions like executing erroneous transactions or modifying system configurations. Traditional E&O coverage may not fully capture these automation risk profiles.
  • Investigate coverage that protects against losses arising from compromised external systems, including risks tied to poisoned training data.
  • With the possibility of increased forensic costs due to memory poisoning, ensure coverage includes incident response costs.

Source: Security Boulevard (January 16, 2026). The Cybersecurity Risks of Agentic AI: What Security Teams Need to Know.
