Risk Wrap 036: Professional Laundering, AI Risks, Space Law Gaps, Fintech Risks, Developer Liability, and New AI Chatbot Obligations

From crypto laundering to new AI regulation, this edition of Risk Wrap highlights six developments shaping compliance, governance, and insurance exposure across high-risk industries.

 

Crypto Crime Broke Records in 2025 as Nation-States and Professional Launderers Scaled Up

In 2025, illicit cryptocurrency activity reached an all-time high. Wallets tied to criminal activity received at least $154 billion, a 162% increase from the year before. This surge was largely driven by nation-states using crypto ecosystems to evade international sanctions, as shown below. At the same time, the crypto crime ecosystem as a whole has become highly professionalized, supporting transnational criminal networks.

[Chart omitted; source: Cyber Security News (January 12, 2026).]

Beyond direct nation-state sanctions evasion, North Korean-linked hackers stole over $2 billion in 2025, Iranian proxy networks enabled the laundering of a further $2 billion, and Chinese networks became dominant providers of laundering-as-a-service.

Stablecoins now make up 84% of illicit transaction volumes. Their cross-border transferability, lower volatility, and broader utility across trading platforms make them an appealing asset for illicit actors.

Implications for brokers and their clients:

  • Investigate comprehensive cyber insurance and crime coverage that responds to blockchain-specific risks, including hot wallet hacks, private key theft, and smart contract exploits.
  • Crypto exchanges should consider dedicated crypto custody insurance to protect customer funds from theft, insider fraud, and third-party failures.
  • With regulators increasing scrutiny over illicit crypto flows, firms should evaluate whether their policies cover investigations, customer lawsuits, and compliance failures tied to KYC, AML, and transaction-monitoring breakdowns; a sketch of what basic monitoring involves follows this list.
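
To ground the transaction-monitoring point, the sketch below shows the kind of rule-based screening a compliance program might run over transfers. The sanctions set, threshold, and Transfer fields are illustrative assumptions rather than any vendor's actual schema; real programs layer graph analytics and wallet clustering on top of simple rules like these.

```python
# Minimal sketch of rule-based screening for on-chain transfers.
# The sanctions set, threshold, and Transfer fields are illustrative
# assumptions, not any vendor's real schema or API.
from dataclasses import dataclass

SANCTIONED_ADDRESSES = {"0xSanctionedExample1", "0xSanctionedExample2"}  # hypothetical
REPORTING_THRESHOLD_USD = 10_000  # illustrative alert threshold

@dataclass
class Transfer:
    tx_hash: str
    sender: str
    recipient: str
    amount_usd: float
    asset: str  # e.g. "USDT"; stablecoins dominate illicit volume

def screen(transfer: Transfer) -> list[str]:
    """Return alert reasons; an empty list means the transfer passes."""
    alerts = []
    if {transfer.sender, transfer.recipient} & SANCTIONED_ADDRESSES:
        alerts.append("counterparty on sanctions list")
    if transfer.amount_usd >= REPORTING_THRESHOLD_USD:
        alerts.append("amount at or above reporting threshold")
    return alerts

# Example: a stablecoin transfer to a listed address raises both alerts.
tx = Transfer("0xabc", "0xUserWallet", "0xSanctionedExample1", 25_000.0, "USDT")
print(screen(tx))
```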

Source: Cyber Security News (January 12, 2026). Cybercriminal Cryptocurrency Transactions Peaked in 2025 Following Nation‑State Sanctions Evasion Moves.

 

The Rise of Deepfakes and Agents: What’s on the Horizon for AI Liability?

Legal liability surrounding AI continues to move from theoretical debate to courtroom scrutiny. Major copyright litigation is entering critical phases and may influence how training data use is judged, prompting companies to audit their gen AI practices for input and output risks.

A key emerging concern is agentic AI. Courts have yet to definitively allocate responsibility for the conduct of fully autonomous agents. Organizations have been advised to review vendor agreements and strengthen indemnification clauses to address autonomous actions and hallucinations that lead to financial loss.
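
Pending clearer case law, one practical control is to gate an agent's financially consequential actions behind human review. The sketch below is a minimal illustration; the action names, the require_human_approval stub, and the dispatch flow are assumptions for illustration, not any specific agent framework's API.

```python
# Minimal sketch of a human-in-the-loop gate for agentic AI actions.
# The action names, require_human_approval stub, and dispatch flow are
# hypothetical illustrations, not an agent framework's real API.

HIGH_RISK_ACTIONS = {"execute_payment", "sign_contract", "place_trade"}

def require_human_approval(action: str, payload: dict) -> bool:
    """Stub for an approval workflow (ticket queue, review dashboard, etc.)."""
    print(f"Approval requested: {action} {payload}")
    return False  # deny by default until a human explicitly signs off

def dispatch(action: str, payload: dict) -> str:
    # Agents can hallucinate counterparties or amounts, so anything
    # financially consequential is routed through a human reviewer.
    if action in HIGH_RISK_ACTIONS and not require_human_approval(action, payload):
        return f"blocked: {action} pending human review"
    return f"executed: {action}"

print(dispatch("execute_payment", {"to": "ACME Corp", "amount_usd": 50_000}))
```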

Deepfakes and synthesized likenesses are another critical issue. For example, AI voice spoofing exposes banks and insurance companies to heightened risk of imposter fraud.

New regulations are emerging while the EU AI Act continues in its phased implementation. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) is effective as of January 1, 2026, while the Colorado AI Act is set to come into force in June 2026.

Implications for brokers and their clients:

  • Review IP liability coverage to ensure it addresses claims linked to AI training and alleged infringement.
  • Review whether tech E&O policies explicitly respond to losses caused by autonomous or agentic AI actions, including hallucinations or unsupervised decision-making that results in financial harm to customers or counterparties.
  • Crime and social engineering coverage enhancements may be needed to address AI-enabled impersonation risks like deepfake voice or likeness fraud, which sit outside traditional cyber breach scenarios.

Source: JD Supra (January 8, 2026). 2026 AI Legal Forecast: From Innovation to Compliance.

 

Could Other Nations Steal NASA’s Mars Samples?

The sample tubes cached by NASA’s Perseverance rover still await retrieval from Mars. The suspension of the Mars Sample Return mission exposes untested legal terrain, raising questions about who owns the samples themselves.

Under Article VIII of the 1967 Outer Space Treaty, the launching state retains jurisdiction and control over objects it launches, wherever they’re located. The Rescue and Return Agreement obliges any actor that recovers objects to return them to the launching state on request and states that possession does not confer ownership.

Ambiguity arises over whether the samples themselves count as property. The Rescue and Return Agreement addresses space objects but does not explicitly cover extracted materials, so it could be argued that a state retrieving the samples would only be obligated to return the containers, not their contents.

The US Commercial Space Launch Competitiveness Act of 2015 makes it clear that space resources like sample material are US property. International law remains ambiguous, leaving a gap in how future recovery and ownership disputes might be resolved.

Implications for brokers and their clients:

  • Investigate bespoke coverage for operations involving resource extraction.
  • Partner with insurers that have expertise in space law and stay current on regulatory changes across jurisdictions.
  • Review whether policies address mission delays, interruptions, or liability arising from regulatory ambiguity.

Source: Payload Space (January 13, 2026). Op-ed: Mars Sample Return May Be Canceled, But the Legal Questions It Leaves Behind Continue.

 

Financial Firms Flag AI as a Growing Operational and Governance Threat

DTCC’s annual Systemic Risk Barometer survey captures risk priorities among global financial firms. This year, firms are increasingly uneasy about the convergence of AI and fintech. Respondents pointed to cybersecurity and data protection vulnerabilities as the leading risk associated with AI adoption, with 41% identifying this as their primary AI-related worry.

Many firms are experimenting with gen AI in areas like trading, risk analysis, and client communications. Among respondents, 38% flagged AI-generated misinformation as a concern, worrying that inaccurate or false outputs could influence decision-making or mislead clients and regulators; 37% cited insufficient governance, and 34% overreliance on AI in critical processes.

Beyond AI, 63% of firms are concerned about cybersecurity in general. The survey also asked about quantum computing risk. Only 29% of organizations are actively planning for cybersecurity threats linked to quantum technologies, while 25% recognize the risk but do not have plans to address it yet.

Implications for brokers and their clients:

  • Fintechs should investigate whether they need policies that explicitly cover losses, claims, and regulatory investigations arising from AI errors, data leaks, model failures, or automated decision-making gone wrong.
  • As regulators scrutinize how firms deploy AI, manage data, and oversee automated systems, fintechs may need insurance that responds to investigations, shareholder claims, and customer lawsuits tied to technology-driven compliance breakdowns.
  • Looking ahead, fintechs should consider business interruption insurance to absorb losses from large-scale encryption failures or market-wide technology disruptions.

Source: SecurityBrief New Zealand (January 9, 2026). Geopolitics, cyber & AI top DTCC systemic risk survey.

 

Will Blockchain Developers Finally Be Off the Hook?

A bipartisan bill dubbed the Blockchain Regulatory Certainty Act has been reintroduced in the US Senate. It aims to clarify when developers and infrastructure providers in the crypto sector can be classified as money transmitters.

The proposal would draw a clearer line between “non-controlling” developers and infrastructure providers, who simply write or maintain blockchain software, and financial intermediaries that take control of user assets.
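
To make the “control” line concrete, the hedged sketch below contrasts a non-custodial relay, which merely forwards transactions users have signed with their own keys, with a custodial wallet that holds keys and signs on users' behalf. All names and stubs are hypothetical illustrations, not the bill's statutory language.

```python
# Hedged sketch of the "control over user assets" distinction in code.
# broadcast(), sign(), and both classes are hypothetical illustrations.

def broadcast(signed_tx: bytes) -> str:
    """Stub: would submit a signed transaction to the network."""
    return "submitted"

def sign(tx: dict, private_key: bytes) -> bytes:
    """Stub: would produce a signed transaction."""
    return repr(tx).encode() + private_key

class NonCustodialRelay:
    """Forwards transactions users signed with their own keys.
    The operator never holds keys, i.e. never controls user assets:
    the 'non-controlling' profile the bill would carve out."""
    def submit(self, signed_tx: bytes) -> str:
        return broadcast(signed_tx)

class CustodialWallet:
    """Holds users' private keys and signs on their behalf: the
    control over assets that money-transmitter rules target."""
    def __init__(self) -> None:
        self.keys: dict[str, bytes] = {}
    def send(self, user_id: str, tx: dict) -> str:
        return broadcast(sign(tx, self.keys[user_id]))
```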

Senator Ron Wyden has been quoted as saying that “forcing developers who write code to follow the same rules as exchanges or brokers is technologically illiterate and a recipe for violating Americans’ privacy and free speech rights.”

The bill is part of a wider push for regulatory clarity surrounding crypto after several high-profile cases shed light on the issue.

Implications for brokers and their clients:

  • Investigate insurance that covers legal defense costs, investigations, and regulatory actions brought by agencies like the SEC or CFTC.
  • Investigate tech E&O insurance that specifically covers smart contract vulnerabilities and coding errors.
  • Maintain robust cyber liability and crime policies that cover losses from hacks, breaches, and asset control incidents.

Source: Decrypt (January 13, 2026). Bipartisan Senate Bill Seeks Clarity on Crypto Developer Liability Under Federal Law.

 

NY and CA Crack Down on AI Companion Chatbots to Prevent User Harm

New York and California have recently enacted laws targeting AI companion chatbots, systems designed to engage users in sustained, human-like interactions that can create emotional attachment. They are often used for emotional or mental health support.

New York’s AI Companion Models statute (effective November 5, 2025) and California’s SB 243 (effective January 1, 2026) impose obligations on operators of these chatbots to reduce the risk of user harm, especially to minors.

Under both laws, operators must provide clear disclosures that users are interacting with AI rather than humans. California adds further obligations for minors, including frequent reminders of the chatbot’s artificial nature and safeguards against harmful content.
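
As a hedged illustration of how these obligations might surface in an implementation, the sketch below prepends an up-front AI disclosure for every user and a periodic reminder for minors. The interval and message wording are assumptions for illustration; the statutes, not this sketch, define the actual requirements.

```python
# Minimal sketch of disclosure logic for an AI companion chatbot.
# The reminder cadence and message wording are illustrative assumptions;
# the statutes themselves define what operators must actually provide.
import time

REMINDER_INTERVAL_SECS = 3 * 60 * 60  # assumed periodic cadence for minors

class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder = time.monotonic()
        self.disclosed = False

    def _notices(self) -> list[str]:
        notices = []
        if not self.disclosed:  # clear up-front AI disclosure for all users
            notices.append("Notice: you are chatting with an AI, not a human.")
            self.disclosed = True
        if self.user_is_minor and time.monotonic() - self.last_reminder >= REMINDER_INTERVAL_SECS:
            notices.append("Reminder: this companion is an AI. Consider taking a break.")
            self.last_reminder = time.monotonic()
        return notices

    def reply(self, model_output: str) -> str:
        """Prepend any required notices to the model's output."""
        return "\n".join(self._notices() + [model_output])

session = CompanionSession(user_is_minor=True)
print(session.reply("Hi! How was your day?"))
```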

New York’s attorney general has the power to seek injunctions and civil penalties for violations. California’s law provides a private right of action enabling individuals to sue non-compliant operators for actual damages or statutory amounts per violation, as well as attorneys’ fees.

Implications for brokers and their clients:

  • Given heightened scrutiny, investigate policies that cover the costs of legal defense and investigations.
  • Review product liability coverage to ensure it responds to claims alleging emotional or psychological harm arising from AI companion interactions, including failure to implement mandated safeguards or disclosures.
  • Review tech E&O policies to ensure they respond where a failure to present required user warnings results from a technical fault.

Source: JD Supra (January 9, 2026). Analyzing the New AI Companion Chatbot Laws.
