From blockchain developer liability to gambling regulation, this edition of Risk Wrap highlights six developments shaping compliance, governance, and insurance exposure across high-risk industries.
Will US Blockchain Developers Finally Be Safe from Criminal Liability?
On February 26, Representatives Scott Fitzgerald, Ben Cline, and Zoe Lofgren introduced the Promoting Innovation in Blockchain Development Act in Congress.
The bill aims to clarify that developers of non-custodial blockchain software shouldn’t be treated as financial intermediaries if they don’t control users’ funds. To be classed as a money transmitter, businesses must exercise “control over currency, funds, or other value that substitutes for currency.”
Clarifying the distinction between developers and entities that truly control digital assets may help reduce the legal uncertainty that threatens innovation and pushes developers offshore.
Implications for brokers and their clients:
- While non-custodial developers may soon have further protection, firms should still carry tech E&O insurance that protects against claims arising from coding errors, smart-contract failures, or software vulnerabilities.
- Review D&O coverage to protect leadership from lawsuits tied to compliance decisions or alleged misclassification of services.
- Firms providing crypto custody services should investigate dedicated crypto custody insurance.
Source: Bitcoin.com (February 28, 2026). The Promoting Innovation in Blockchain Development Act Gets Introduced in Congress.
Emerging insurance industries mentioned: Digital Asset and Web3 Insurance.
Missouri’s AI Crackdown Targets Deepfakes While Rethinking Liability
Missouri lawmakers are preparing to tighten regulations around AI, with a particular focus on deepfakes. Several bills have been introduced that could expand the options for individuals to sue over harmful AI uses, while other proposals seek to limit the liability faced by companies providing the technology.
One proposal gaining attention, The Taylor Swift Act, would give residents of Missouri the right to sue if AI-generated sexual images of them are distributed or published without their written consent. If the person depicted is a minor, a legal guardian would be able to sue even if the depiction is not sexual in nature. Seven lawmakers have made similar proposals and intend to consolidate them into a single bill moving forward.
Other bills under consideration seek to limit liability. For example, one proposal sponsored by Republican state Representatives Scott Miller and Phil Amato would shield companies from criminal liability if they comply with the AI Risk Management Framework developed by the National Institute of Standards and Technology. Another bill would limit potential damages in certain AI-related lawsuits by setting a cap of $10,000.
Implications for brokers and their clients:
- Review tech E&O insurance to ensure AI-specific faults are included.
- Review media liability coverage to ensure it addresses claims related to deepfakes.
- Partner with insurers that have expertise in state-specific AI laws.
Source: Missouri Independent (February 19, 2026). Bills targeting deepfakes, AI liability spark debate among Missouri lawmakers.
Emerging insurance industries mentioned: Artificial Intelligence Insurance.
US Cannabis Companies Face a New Wave of Regulation and Legal Risk in 2026
As 2026 progresses, US-based cannabis companies should anticipate change in several areas:
- Stricter regulation of intoxicating hemp and THCA products. Many popular hemp-derived cannabinoids (e.g., delta-8, delta-10, THCA products) may fall outside the hemp definition and be treated as federally illegal.
- Trademark and intellectual property protection. As cannabis remains federally illegal, federal trademark protection for products is generally unavailable. Businesses must rely on state registrations, common law rights, and protections for ancillary goods.
- Growing data privacy and cybersecurity risks. Cannabis operators collect sensitive consumer data (especially in the medical market), making them prime targets for cyberattacks. Compliance is complicated by a patchwork of state privacy laws.
- Complex advertising and marketing compliance. Marketing cannabis products requires navigating both state advertising rules and platform restrictions. Companies must ensure campaigns comply with strict rules regarding youth exposure and health claims.
- Public health-driven regulatory tightening. Regulators are increasingly focusing on health impacts such as youth use and impaired driving. This may lead to stricter rules on THC potency, product testing, labeling, warnings, and packaging, requiring companies to adapt product formulations and marketing strategies.
Implications for brokers and their clients:
- As regulators tighten scrutiny on THC potency, labeling, and product safety, cannabis companies should investigate product liability coverage tailored to the sector.
- Investigate robust cyber liability insurance to cover breach response costs, regulatory investigations, and potential lawsuits following an incident.
- Since federal trademark protections remain limited, investigate cannabis insurance policies that include IP defense and advertising liability to help manage disputes over alleged infringement.
Source: ArentFox Schiff (February 17, 2026). Top Issues for the Cannabis Industry in 2026.
Emerging insurance industries mentioned: Cannabis Insurance.
The Hidden Liability in AI Systems That Firms Can’t Debug
Computer science researcher Neel Somani argues that the biggest risk for AI systems isn't misuse but failure during everyday operation, at a time when organizations lack the means to address those failures.
The challenge stems from debuggability, since AI models often operate as opaque systems whose internal decision-making is difficult to inspect. Somani states that companies deploying AI need to be able to debug with the same level of rigor applied to safety-critical software; otherwise, they’re relying on systems they don’t fully control.
Attempts to explain model output may sound logical, but those explanations often fail to reflect the model's real reasoning and can break down when inputs change slightly.
This disconnect between the system's explanation and its actual mechanism creates two key risks. First, hidden failure modes can emerge because the internal logic is unclear, allowing problems to surface in unexpected situations that testing didn't capture. Second, fixing issues can be unpredictable; modifying one part of a model may trigger new problems elsewhere.
As a result, organizations relying on AI for critical decisions face operational risk if they can’t reliably identify causes of errors, safely adjust the system, and confirm that fixes work.
Somani argues that governance alone can’t solve this problem. It helps manage risk but doesn’t address the core issue if organizations lack the technical ability to diagnose and correct failures.
Improving debuggability will likely require technical approaches used in safety-critical systems like aircraft controls and cryptographic protocols. Emerging research suggests these methods could potentially be applied to other AI systems. Organizations must decide whether to wait for these tools to mature or start building the technical capacity to better understand and control their systems now.
Implications for brokers and their clients:
- Investigate business interruption insurance to help offset revenue losses and recovery costs if AI-driven systems malfunction or require shutdown for remediation.
- Secure robust tech E&O coverage to address claims arising from incorrect outputs, flawed automated decisions, or operational failures caused by opaque model behavior.
- Investigate tailored D&O coverage to help protect leadership from regulatory investigations or shareholder claims related to AI risk management.
Source: Space Coast Daily (February 6, 2026). Neel Somani: The Hidden Liability in Every AI Deployment.
Emerging insurance industries mentioned: Artificial Intelligence Insurance.
AI Agents Controlling Crypto: A Recipe for Disaster?
AI agents are being given control over crypto wallets, enabling them to hold assets, pay for services, execute trades, and hire other agents. Wallets that aren't directly under human control raise new questions about liability, and legal frameworks lag behind the technology.
To quote Avichal Garg of Electric Capital during a NEARCON 2026 panel, “What happens if there’s not a human behind it at all? It’s some piece of code that owns a wallet, executing code to make more money… How does liability work in that case? I actually don’t know.”
Implications for brokers and their clients:
- Investigate specialized cyber and digital asset coverage to protect against losses from exploits, unauthorized transactions, and compromised smart contract interactions.
- Secure robust tech E&O coverage that addresses claims arising from unintended transactions and financial losses caused by agent decision-making.
- As the legal framework around autonomous agents controlling assets is still evolving, secure D&O coverage to protect executives and boards from regulatory investigations or shareholder claims tied to governance of AI-driven financial systems.
Source: CoinDesk (February 24, 2026). Crypto Wallets for AI Agents Are Creating a New Legal Frontier, Says Electric Capital.
Emerging insurance industries mentioned: Digital Asset and Web3 Insurance; Artificial Intelligence Insurance.
Finland Examines the Limits of AI in Preventing Gambling Harm
An ongoing consultation by the Gambling Harm Risk and Harm Assessment Group within Finland's Ministry of Social Affairs and Health shows growing debate over how responsibility for preventing gambling harm should be allocated to licensed operators.
One area of debate is the role of AI in regulatory compliance. Some stakeholders argue that AI systems shouldn’t be relied on too heavily to satisfy statutory compliance obligations, but that operators should still be encouraged to develop and implement these tools, provided their methodologies are transparently documented and reported to regulators.
Others have raised concerns that the proposals don’t sufficiently define operators’ duty of care toward players. They warn that leaving decisions about identifying and addressing harmful gambling largely to individual companies could weaken accountability, given operators’ commercial incentives, and they call for more centralized oversight.
Implications for brokers and their clients:
- As regulators clarify operators’ duty of care and oversight requirements, ensure gambling insurance policies include coverage for regulatory investigations and enforcement actions to offset legal costs tied to compliance reviews or alleged failures to prevent harm.
- Operators using AI tools to detect harmful gambling behavior should review tech E&O coverage to ensure it addresses claims that monitoring systems failed to identify or appropriately intervene in cases of problem gambling.
- Secure media liability coverage to address claims related to misleading promotions, inadequate warnings, or advertising that allegedly targets vulnerable players.
Source: iGaming Expert (February 18, 2026). Finland Gambling Regulation Debate Includes AI Role.
Emerging insurance industries mentioned: Gambling Insurance.