
Risk Wrap 040: Illicit Crypto, Deepfake Fraud, Misstatements in AI, Consumer Reporting Violations, Moroccan Cannabis Sector, and New Uzbek AI Regulation


From money laundering to new AI law, this edition of Risk Wrap highlights six developments shaping compliance, governance, and insurance exposure across high-risk industries.

2025 Marked a Sharp Rise in Crypto Flows to Human Trafficking Networks

New data from Chainalysis shows that cryptocurrency flows to suspected human trafficking services grew by 85% in 2025. Most of these services are based in Southeast Asia, and total inflows across identified services have reached hundreds of millions of dollars.

The analysis tracked four main categories of suspected crypto-facilitated human trafficking:

  • Telegram-based ‘international escort’ services.
  • Telegram-based ‘labor placement’ agents that enable kidnapping and forced labor for scam compounds.
  • Prostitution networks.
  • Child sexual abuse material (CSAM) vendors, which are networks of individuals engaged in the production and dissemination of CSAM.

 

Inflows to human trafficking services by asset type 2025
Source: Chainalysis (February 12, 2026).

48.8% of Telegram-based international escort service transactions exceeded $10,000, indicating wide-scale professionalized operations. 59.8% of prostitution network transactions fell between $1,000 and $10,000, suggesting possible agency-level operations. Lower amounts for CSAM transactions may reflect the prevalence of subscription services. CSAM networks increasingly use Monero to launder proceeds.

Distribution of USD Amount by Types of Service
Source: Chainalysis (February 12, 2026).

Telegram-based services are deeply integrated with Chinese-language money laundering networks and guarantee platforms.

Telegram-based 'international escort' services sending behavior 2022-2025
Source: Chainalysis (February 12, 2026).

The adoption of cryptocurrency internationally has enabled these services to broaden their reach and has contributed to the development of sophisticated global operations. The chart below shows the intensity of flows from different regions.

Crypto flows to East and Southeast Asian human trafficking networks
Source: Chainalysis (February 12, 2026).

Blockchain transparency enables powerful tools for detection and prevention. Crypto firms are advised to look out for:

  • Frequent large payments to labor placement services paired with cross-border transactions.
  • Concentrated flows to regions associated with trafficking.
  • Frequent stablecoin conversion patterns.
  • High-volume transactions through guarantee platforms.
  • Wallet clusters exhibiting activity across multiple illicit service categories.
  • Links to Telegram-based recruitment channels.
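The red flags above can be sketched as simple screening rules. The snippet below is an illustrative toy only: the `Transaction` shape, category names, region watchlist, and dollar threshold are all hypothetical assumptions for demonstration, not Chainalysis methodology or any firm's actual monitoring logic.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    usd_amount: float
    counterparty_category: str  # hypothetical labels, e.g. "labor_placement", "guarantee_platform"
    cross_border: bool
    region: str

# Assumed watchlist and threshold for illustration only.
HIGH_RISK_REGIONS = {"Southeast Asia"}
LARGE_PAYMENT_USD = 10_000

def flag_reasons(tx: Transaction) -> list[str]:
    """Return the red-flag heuristics (if any) that a transaction trips."""
    reasons = []
    if (tx.counterparty_category == "labor_placement"
            and tx.usd_amount >= LARGE_PAYMENT_USD and tx.cross_border):
        reasons.append("large cross-border payment to labor placement service")
    if tx.region in HIGH_RISK_REGIONS:
        reasons.append("flow to region associated with trafficking")
    if (tx.counterparty_category == "guarantee_platform"
            and tx.usd_amount >= LARGE_PAYMENT_USD):
        reasons.append("high-volume transaction through guarantee platform")
    return reasons
```

In practice, rules like these would feed a case-management queue for analyst review rather than trigger automated action, and would be combined with wallet clustering and cross-category behavior analysis.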

Implications for brokers and their clients:

  • Secure D&O insurance that responds to regulatory inquiries, enforcement actions, and shareholder claims alleging failures in AML/KYC governance.
  • Ensure professional liability policies cover claims around failing to meet AML/KYC obligations, including negligent transaction monitoring, SAR failures, or onboarding deficiencies.
  • Review crime insurance policies to ensure they are explicitly tailored to the exposures affecting crypto firms.

Source: Chainalysis (February 12, 2026). Cryptocurrency Flows to Suspected Human Trafficking Services Surge 85% Year-over-Year.

Emerging insurance industries mentioned: Digital asset and web3 insurance; digital asset crime insurance.

 

Deepfake Fraud Is Becoming a Systemic Financial Risk

Deepfake-enabled fraud is already widespread and is expected to accelerate as access to technology improves. In one notable case in Hong Kong, a finance employee was tricked into authorizing 15 transfers totaling nearly $25 million after participating in what appeared to be a routine video call with the company’s CFO and several coworkers. In reality, none of those individuals were present. Their images and voices were entirely AI-generated.

In a Deloitte survey, 25.9% of executives reported that their organizations had experienced at least one deepfake incident. Other research indicates that 92% of companies have experienced economic loss due to deepfakes.

Regulators and courts are increasingly signaling that institutions may be held accountable for failing to prevent or mitigate this type of fraud. Financial firms are expected to deploy stronger controls and verification measures.

In the future, more sophisticated and large-scale deepfake scams could distort market perception and create securities law risks. Deepfake advertisements may trigger consumer protection litigation, and the use of AI-generated voices in automated phone calls could expose companies to lawsuits under the U.S. Telephone Consumer Protection Act.

To reduce exposure, companies are being urged to strengthen internal controls, tighten authentication processes, educate employees and customers, restrict official communication channels, and proactively detect and respond to fraudulent practices.

Implications for brokers and their clients:

  • Review cyber and crime insurance policies to confirm coverage for social engineering, funds transfer fraud, and losses caused by AI-enabled impersonation.
  • Review whether professional liability policies adequately address regulatory investigations and consumer litigation arising from deepfake-related incidents.
  • Confirm that coverage extends to acts committed through third-party vendors that deploy AI tools.

Source: J. Randall Boyer. (January 21, 2026). Corporate Fraud and Institutional Liability in the Age of Deepfakes.

Emerging insurance industries mentioned: Artificial intelligence insurance.

 

Oracle Faces Investor Lawsuits as AI Spending and Revenue Claims Come Under Scrutiny

Skepticism around AI investment continues as Oracle faces multiple class action lawsuits, which allege that it misled investors about its AI infrastructure strategy and related capital expenditures.

Plaintiffs claim that Oracle painted an overly optimistic picture of how its significant investments in AI-ready data centers would translate into near-term revenue growth. There are concerns that heavy capital expenditure has exposed the company to increased risk around cash flow and balance sheet strength, and delivery on some high-profile contracts has been delayed.

Implications for brokers and their clients:

  • Given the current market sentiment, AI companies should review limits on directors’ and officers’ insurance policies to account for heightened securities litigation risk.
  • D&O policy language should be reviewed to ensure no exclusions apply to claims arising from AI-related statements.
  • Investigate tailored representations and warranties insurance when AI-related disclosures may affect transactions.

Source: Yahoo Finance. (February 16, 2026). Oracle Lawsuits And Air Force Win Pull AI Cloud Story In Tension.

Lines of business mentioned: Directors and officers insurance.

 

AI Hiring Tools Face Legal Test Under Consumer Reporting Laws

A proposed class action lawsuit filed in California on January 20, 2026, alleges that Eightfold AI Inc.’s applicant screening tool may violate the federal Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA) by compiling and selling detailed profiles on job applicants without proper consent or disclosure.

The complaint states that the tool collects extensive personal information on candidates and uses an LLM to evaluate and rank their suitability. Sources include social media profiles, publications, job application history, location data, and device tracking data.

Plaintiffs argue that the tool may be operating outside the legal framework designed to protect applicants. For example, the FCRA imposes adverse action notice requirements, under which applicants must be given the opportunity to correct inaccurate information before ‘adverse’ decisions (such as rejection for a position) are made.

According to a recent LinkedIn study, 93% of recruiters plan to increase their use of AI in 2026 and 59% said it’s already helping them with candidate discovery. 66% intend to increase their use of AI for pre-screening in 2026.

Companies that use automated decision-making involving applicant or employee-specific data may wish to review which data is collected and monitor this lawsuit, which is still in its early stages.

Implications for brokers and their clients:

  • Providers of similar technologies should review whether tech E&O insurance explicitly covers claims relating to data-use laws applicable to AI systems.
  • Users of AI recruitment tools (or any tools where AI makes decisions based on personal data) should review whether third-party liability coverage explicitly responds to allegations of algorithmic bias or discriminatory outcomes.
  • Review media liability insurance to confirm coverage for claims alleging that AI-generated applicant profiles, rankings, or summaries contain false, misleading, or defamatory information about individuals.

Source: Stephen Woods & Zachary Zagger. (February 13, 2026). Groundbreaking Lawsuit Tests Whether AI Hiring Tools Trigger FCRA Compliance.

Emerging insurance industries mentioned: Artificial intelligence insurance.

Lines of business mentioned: Media errors and omissions.

 

Morocco’s Medical Cannabis Industry Gains Momentum

A recent study indicates that Morocco is well-positioned to become a significant producer of medical cannabis-based medicines, thanks to a combination of a clear legal framework, growing scientific expertise, and an established pharmaceutical industry.

Researchers cited the 2025 entry of the first Moroccan CBD-based medicine for drug-resistant epilepsy as a key milestone in moving from regulatory groundwork into actual pharmaceutical production.

Morocco’s legal framework (based on Law 13-21, which authorizes cannabis cultivation for medical, pharmaceutical, and industrial use) has enabled licensed cultivation, processing, and further research.

The study also notes that Morocco aims to capture 10-15% of the European medical cannabis market by the late 2020s, potentially generating between $420 million and $620 million per year.

Implications for brokers and their clients:

  • Cannabis producers should maintain robust product liability insurance that explicitly addresses risks tied to therapeutic efficacy, adverse reactions, and compliance with evolving national and international standards.
  • Investigate policies that protect against claims arising from alleged breaches of licensing, traceability, THC content regulations, and export control requirements imposed under evolving Moroccan law and ANRAC oversight.
  • Given Morocco’s focus on medical cannabis exports, companies should secure cargo insurance tailored to risks in international distribution, including regulatory detentions, shipment delays, and cross-border compliance challenges.

Source: Hespress English. (February 2, 2026). Study: Morocco has strong potential to build medical cannabis drug industry.

Emerging insurance industries mentioned: Cannabis insurance.

 

Uzbekistan Introduces Its First Legal Framework Governing AI

Uzbekistan has enacted its first set of AI-specific legal amendments by updating the Law ‘On Informatization’ and the Administrative Liability Code to regulate the use of AI technologies.

The amendments took effect on January 21, 2026. Key changes include:

  • The legal definition of AI: AI is defined as ‘a set of technological solutions that makes it possible to imitate human cognitive functions (including self-learning and decision-making) and to obtain, when performing specific tasks, results comparable to the results of human intellectual activity.’
  • General rules for AI use: AI systems must not harm individuals or violate fundamental rights, and legally significant decisions affecting rights and freedoms can’t be based solely on AI outputs. This effectively mandates human oversight.
  • Administrative offenses for AI misuse: Unlawful processing or dissemination of personal data using AI is now an administrative offense, punishable by fines and potential confiscation of ‘items’ used in the violation.

Organizations using or developing AI in Uzbekistan should update compliance, governance, and contracting practices to align with the new requirements and prepare for further guidance.

Implications for brokers and their clients:

  • Investigate tailored AI coverage from insurers with expertise in regulation applicable to the region.
  • Review cyber insurance policies to ensure they cover claims, investigations, and fines (where insurable) arising from data breaches.
  • Review tech E&O insurance to confirm that it responds to allegations that AI systems produce unlawful, inaccurate, or non-compliant outputs, especially where AI-driven decisions impact individuals’ rights and are alleged to violate new human oversight requirements.

Source: Dentons. (January 29, 2026). Uzbekistan adopts first AI-focused amendments to information and administrative laws.

Emerging insurance industries mentioned: Artificial intelligence insurance.
