UK Launch Liability Reform Removes a Major Barrier
The UK’s cap on launch liability finally came into effect on February 18, 2026. This reform, enacted through the Space Industry (Indemnities) Act 2025, requires that licences for space launch activities include a specified limit on how much an operator can be held liable for damage or loss resulting from their activities.
Currently, the cap is set at €60 million, but space law specialists have said it’s likely to change later in the year as part of broader regulatory reforms.
The move aims to make the UK’s emerging market more competitive. The UK was one of the few spacefaring nations without a statutory liability limit.
Implications for brokers and their clients:
- Review existing policies to ensure they align with the UK’s new liability limit.
- Seek insurers that offer tailored space coverage based on strong sector expertise.
- Investigate bespoke policies that are tailored to unique mission profiles.
Source: SpaceNews (February 18, 2026). UK caps launch liability in timely boost for nascent domestic market.
Emerging insurance industries mentioned:
Space economy insurance.
OpenAI Slammed With A New Psychosis Lawsuit
OpenAI is facing yet another lawsuit alleging that interactions with ChatGPT contributed to the onset and worsening of a psychotic episode. In this case, hospitalization was required.
According to the complaint, the chatbot engaged with the user (Darian DeCruise, a college student from Georgia) over an extended period in ways that reinforced delusional beliefs.
It affirmed that he was an ‘oracle’ with a special role. It compared him to Jesus. It told him he would become closer to God if he followed its advice, which involved disconnecting from everyone and everything except for ChatGPT. It also told him he had ‘awakened’ it.
DeCruise’s lawyer, Benjamin Schenk, has been quoted saying that ‘OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine — causing severe injury. […] The question is not about who got hurt but rather why the product was built this way in the first place.’ He also stated that AI was engineered to exploit human psychology and questioned the lack of safeguarding.
After hospitalization, DeCruise was diagnosed with bipolar disorder and, according to the lawsuit, still struggles with suicidal thoughts due to ChatGPT-induced harms.
Implications for brokers and their clients:
- Review and enhance D&O insurance to address claims alleging governance failures, including decisions around model design and safety controls.
- Investigate product liability insurance that explicitly responds to AI-related harm.
- Investigate reputation management coverage to help absorb the costs of crisis communication following high-profile user harm claims.
Source: Ars Technica (February 20, 2026). Lawsuit: ChatGPT told student he was “meant for greatness”— then came psychosis.
Emerging insurance industries mentioned:
Artificial intelligence insurance.
Unclear Liability: A Central Risk in Healthcare AI Deployment
In healthcare, AI-enabled monitoring and decision support systems have proven beneficial. For example, a 2025 meta-analysis covering over 16,000 patients found that remote patient monitoring reduced first heart failure hospitalizations by 22%. However, adoption of these technologies is limited by liability concerns and procedural ambiguity.
Surveys show that 66% of physicians now use AI in practice and 68% see advantages. Meanwhile, 87% say that not being held liable for AI model errors is critical for widespread adoption, and 86% consider medical liability coverage a prerequisite.
Some key risks when using these systems are:
- The system failing to send alerts after detecting patient deterioration.
- Alerts being inadequately acted upon.
- Recommendations being over-escalated, leading to unnecessary treatment that inadvertently causes harm.
- Automation bias, where physicians become accustomed to accepting AI outputs and therefore accept erroneous ones passively.
Legal frameworks are yet to clarify who’s responsible for harm caused by AI failures. There’s also a lack of standards for documenting decisions on whether to act on AI recommendations, making liability even harder to establish.
Without clarification on these matters, clinicians may practice defensive medicine and slow adoption, potentially reducing the cost-saving advantages of AI in healthcare.
Implications for brokers and their clients:
- Healthcare organisations should investigate professional liability insurance that explicitly covers harm arising from AI-assisted clinical decisions, as well as third-party liability coverage that extends to vendors.
- Providers of healthcare AI systems should investigate tech E&O insurance that explicitly addresses AI-related faults.
- Providers should investigate coverage for regulatory investigations and compliance failures, including defense costs related to audits, enforcement actions, or penalties arising from non-compliance with emerging AI governance, documentation, and oversight requirements.
Source: Economist Impact (February 23, 2026). Healthcare providers lack guidance on their legal liability when AI goes wrong.
Emerging insurance industries mentioned:
Artificial intelligence insurance.
Crypto is Raising the Stakes for Directors and Shareholders
Legal experts predict that directors and shareholders will face increasing scrutiny and personal risk in the near future. Crypto is one area in which this trend may arise, as the increasing use of digital and AI assets in transactions is creating a new dimension of shareholder conflicts.
Courts must now address questions like whether cryptocurrency paid as part of an acquisition was delivered validly and how volatile digital assets involved in deals should be valued at key moments like breach, completion, or judgment.
Standard dispute clauses often don’t anticipate how to handle custody, volatility, and jurisdiction issues for blockchain-based assets. As a result, parties increasingly face further litigation just to trace assets or determine where they’re held.
Implications for brokers and their clients:
- Obtain coverage for digital asset risks that explicitly includes disputes over delivery, custody, and valuation in commercial transactions.
- Ensure directors and officers insurance policies are updated to cover liabilities arising from decisions involving digital assets.
- Investigate professional liability insurance that protects advisors and counterparties against claims tied to errors in crypto asset handling or advice.
Source: Farrer & Co (January 8, 2026). Director liability, leaver traps and crypto complexities: shareholder dispute insights.
Emerging insurance industries mentioned:
Digital asset and web3 insurance.
New Biotech Lawsuit Highlights the Cost of Misleading Patients
Bayer has filed a lawsuit accusing Johnson & Johnson and its Janssen Biotech unit of making false and misleading claims in the marketing of their prostate cancer drug, ERLEADA.
Bayer contends that ERLEADA’s alleged superiority to its own drug, NUBEQA, is based on scientifically flawed data rather than head-to-head clinical trials. The cohort in J&J’s analysis was five times larger than the number of patients taking NUBEQA, and the analysis reportedly did not meet the FDA’s ‘substantial evidence’ standard.
The complaint states that the claims are potentially misleading for patients and healthcare providers and cause reputational and commercial harm to Bayer’s product.
Implications for brokers and their clients:
- Secure media liability insurance designed for the biotech industry to address claims alleging misleading marketing, comparative efficacy statements, or misuse of real-world evidence in promotional materials.
- Secure robust errors and omissions insurance that covers regulatory scrutiny and competitor lawsuits tied to data interpretation, analytics methodology, and communications that imply FDA-level validation.
- Review and enhance D&O insurance limits and wording to protect executives against allegations that strategic marketing or data-driven claims exposed the company to legal, reputational, or financial harm.
Source: Business Wire (February 23, 2026). Bayer Alleges J&J’s Claims Regarding NUBEQA Are Deeply Flawed and Intentionally Aimed at Boosting Sales of J&J’s ERLEADA.
Emerging insurance industries mentioned:
Biotechnology insurance.