December 19, 2025

Significant Drug & Device Developments of 2025

Drug & Device Companies Need to Be Vigilant and Nimble in Adapting to These Shifts

As we welcome the new year, it is time to reflect on some of the most significant legal developments in the drug and device space in 2025.

At a Glance

  • This year brought a change in administration, a major shakeup in the regulatory landscape, and growing skepticism toward science and regulatory entities. Practitioners should start thinking about how to present scientific evidence to increasingly skeptical juries in 2026, as the old norms may no longer apply.
  • The EU’s new Product Liability Directive marks a major overhaul of EU product liability law. Once it is fully adopted by EU member states, with a deadline of December 9, 2026, it will generally make it easier for claimants to bring claims against manufacturers. Further, its expanded definition of “product” to include software and digital technology will expose a new industry to legal risks and potential liability.
  • The use of AI in medical devices is new and growing. Device manufacturers should remain aware of how generative AI is used in their products and stay updated as the regulatory and litigation environment continues to take shape and evolve.
  • The Fifth Circuit’s decisions in Ruffin v. BP and Williams v. BP reaffirm that plaintiffs in toxic tort cases must independently prove both general and specific causation, as failure to substantiate either defeats their claims. Further, Ruffin introduced a shift from the traditional “threshold dose” requirement for general causation, allowing qualitative evidence like epidemiological studies to suffice.

1. Navigating a New Skepticism in Science 

Not long ago, the average American likely could not name the U.S. secretary of health and human services. Yet, following this year’s change in administration and major shakeup in the regulatory landscape, skepticism toward science has become the elephant in the room for anyone working in the drug and device sphere. Practitioners should start thinking about how to present scientific evidence to juries in 2026, as the old norms may no longer apply.

Over the past few years, attitudes toward science and regulatory agencies have shifted dramatically. It used to be the case that when a product was approved or cleared by the FDA or the EPA, juries accepted it as a sign of both safety and reliability. Now, however, such assertions do not necessarily pass muster. 

The COVID-19 pandemic initially put scientific institutions under an unprecedented microscope, with a roller-coaster public response, and the recent change in the federal administration has only amplified public scrutiny amid intense polarization. For instance, Verdict Insight Partners, a jury consulting firm, surveyed mock jurors this year and found skepticism about the FDA’s staffing and expertise, especially after the news of mass layoffs at the start of the current administration.1 According to the survey, 39.6% of respondents report that their view of the FDA has become more negative in the second Trump administration, while only 6.6% report a more positive view.2 These numbers are even more striking when broken down by political party, with Democrats showing significantly more distrust and Republicans reporting somewhat more positive views, further highlighting a deep partisan divide in attitudes toward regulatory science.3 Anecdotally, the partisan divide also seems to be a less reliable barometer than it has been historically, with views on vaccines, weight loss drugs, and even Tylenol, a longtime medicine cabinet staple, fluctuating within otherwise “typical” partisan lines.

The reality for practitioners is that heading into 2026, trust in science is no longer a given. Both plaintiff and defense attorneys in the drug and device space will need to start thinking of creative ways to persuade increasingly skeptical jurors. Attorneys have already started shifting toward distancing themselves from federal agencies, relying more heavily on international regulations, and spending more time educating jurors on the foundations of scientific evidence.4

Either way, there is no sign that the elephant in the room is leaving any time soon, so those who learn to address growing skepticism toward science and regulatory institutions head-on will be best positioned for success in 2026 and beyond.

2. The European Union’s New Product Liability Directive 

The European Union’s (EU) new Product Liability Directive (PLD), adopted by the European Council on October 10, 2024, will overhaul product liability law in the EU’s 27 member states. The directive, which member states are required to implement into national law by December 9, 2026, brings significant changes to the EU’s product liability landscape.5 Among other things, the PLD introduces new presumptions of defectiveness and causation that make it easier for claimants to pursue claims, increases defendant disclosure and discovery obligations, expands the scope of liability to additional kinds of defendants, establishes an optional defense for manufacturers based on the limits of current scientific knowledge, and broadens what counts as a “product.”6

One of the most significant changes for manufacturers to be aware of is that new presumptions will make proving product defect and causation easier for claimants. Specifically, a product will be presumptively defective if the manufacturer fails to disclose certain information, the product does not meet safety requirements, or there is an “obvious” malfunction.7 Then, if either the damage caused is consistent with the defect in question or there are excessive difficulties for plaintiffs to prove causation, courts will also presume that the defective product caused the damage.8

These presumptions shift the burden of proof onto defendants to rebut them and thus significantly reduce what a plaintiff must show to successfully pursue a claim. To counterbalance this reduced burden, the PLD does introduce a “state of the art” affirmative defense, under which a manufacturer can show that an alleged defect could not have been discovered given the scientific and technical knowledge at the time, but member states have the option to forgo adopting this defense entirely.9

Further, the PLD generally creates a broader scope of liability for companies in the stream of commerce. Specifically, claimants can bring claims against not just the manufacturer of a product but also the manufacturer of the specific component at issue.10 In addition, for situations where a manufacturer is located outside the EU or the manufacturer cannot be identified, the PLD expands the options claimants have for who they can pursue claims against.11

Beyond reforms affecting traditional consumer product manufacturers, the PLD also introduces an entirely new category of covered products: software and digital technology. Specifically, the PLD broadens the definition of “product” to include: (1) software, including artificial intelligence; (2) digital manufacturing files; and (3) digital services.12 Under the directive, therefore, developers or producers of such software are treated as manufacturers and can be liable for defects arising from software updates and cybersecurity vulnerabilities.13 The PLD does clarify, however, that the definition of product does not extend to free and open-source software developed outside of a commercial activity.14 Regardless, technology and software companies will be exposed to new legal risks and potential liability as product liability law catches up with the digital age.

It is still too soon to tell how these changes to the EU product liability landscape will play out, but 11 EU member states have begun taking steps to adopt the changes brought by the revised PLD.15 Even though the changes will not apply retroactively to products marketed before December 9, 2026, it is more critical than ever that manufacturers — particularly manufacturers of software/AI, which have not previously faced product liability actions — start familiarizing themselves with this directive and begin preparing for an influx of product liability litigation. Indeed, the EU’s adoption of the Representative Action Directive (RAD), 2020/1828, in 202016 and its subsequent implementation by member states have expanded judicial mechanisms for mass and class actions in the EU, including by introducing such mechanisms to jurisdictions that had little to no history of class action litigation. Increased class action filings over the past few years have been noted in multiple member states, with Portugal and the Netherlands emerging as particularly risky jurisdictions.17 Thus, the EU’s substantive expansion of product liability for manufacturers is occurring in an increasingly plaintiff-friendly environment.

3. Using Artificial Intelligence in Medical Devices

There is increasing interest in using generative AI for the diagnosis, treatment, and management of diseases and conditions. Using AI in the delivery of health care can improve access and affordability, as AI can quickly analyze complex medical data and identify patterns. However, as AI becomes more embedded in daily life, the regulatory and legal landscape is quickly evolving, and device manufacturers should stay vigilant.

Currently, there are over 1,000 AI-enabled medical devices regulated by the FDA.18 Under the Federal Food, Drug, and Cosmetic Act, AI is considered a device and subject to FDA regulation if it is “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man.”19 Software functions have traditionally been excluded from this definition; however, software is now subject to regulation when it is a medical device itself or in a medical device.20 For software functions that technically meet the definition of medical device but pose low risk to the public, the FDA has stated it intends to exercise its enforcement discretion at this time.21

The FDA has published guidance related to AI-enabled medical devices throughout this year. Since generative AI algorithms change over time with the input of real-world data, the FDA allows manufacturers to submit Predetermined Change Control Plans (PCCPs), which help manage post-market modifications to AI-enabled medical devices by allowing manufacturers to implement pre-specified changes to the AI without undergoing new FDA review.22 Additionally, post-market monitoring for unintended uses and reporting of adverse events is crucial. In January, the FDA issued guidance on specific labeling requirements for AI-enabled devices.23 It stated that manufacturers must provide: (1) a clear statement that the device uses AI, with a description of how it supports the intended use of the device; (2) details on the inputs and outputs of the model, how data is collected, and what systems it interacts with; (3) performance measures with any known risks or potential sources of bias; (4) if a PCCP is in place, an explanation of how performance will be monitored and how updates will occur; and (5) for devices that are patient- or caregiver-facing, written and accessible instructions and explanations.24 Finally, with the use of AI in medical devices comes concern about data privacy. In its June 2025 premarket guidance, the FDA stated that manufacturers using AI in medical devices must show they have designed safeguards to protect against hacking or system failure.25 The FDA also requires a Software Bill of Materials listing all software components to enable tracking of vulnerabilities, and it expects clear labeling so that users know how to maintain a device’s security.26

Hundreds of laws have been introduced and enacted in the United States related to the use of AI, a reflection of the impact AI has already had on our daily lives.27 Importantly, the AI LEAD Act, introduced in the U.S. Senate in September 2025, would create a federal cause of action for product liability claims when someone is harmed by an AI system.28 Related to health care specifically, as of mid-October, 47 states had introduced legislation and 21 states had passed bills enacting a total of 33 new laws across the country.29 Over 20 bills were introduced establishing guardrails for the use of AI in clinical care, including provider oversight requirements, transparency mandates, and safeguards against bias and misuse of sensitive health data. However, on December 11, 2025, President Trump signed an executive order establishing a single regulatory framework for artificial intelligence and preempting state regulations.30 The executive order states that its purpose is to allow AI companies to freely innovate and to “ensure that there is a minimally burdensome national standard — not 50 discordant State ones.”31 The executive order also calls for the U.S. attorney general to establish an AI Litigation Task Force with the sole responsibility of challenging state AI laws.32 Within 90 days of its signing, the order requires the secretary of commerce to specify conditions under which states “may be eligible to receive remaining funding under the Broadband Equity Access and Deployment, or BEAD, program, a $42.5 billion effort to expand high-speed access in rural areas.”33

The use of generative AI for mental health is an even more recent development. This year, plaintiffs’ attorneys filed multiple lawsuits accusing developers of creating chatbots that are addictive, facilitate explicit sexual content with minors, and contribute to suicide, delusional episodes, and severe psychological harm. While these chatbots are not designed to diagnose or treat conditions in a way that would subject them to FDA oversight, individuals were using them as if they were. As a result, dozens of bills related to the use of chatbots, focusing specifically on disclosure requirements, were enacted. The lawsuits have also prompted federal agencies to seek feedback and insight on how to address the regulation of generative AI. In September, the FTC announced it was launching an inquiry into AI companion chatbots, and the FDA issued a request for public comment on measuring and evaluating the performance of AI-enabled medical devices.34 In November, the FDA held a Digital Health Advisory Committee meeting to gain insight and feedback on how it should regulate generative AI-enabled digital mental health medical devices.35

With the growing need for accessible and affordable health care options and the increasing use of AI in our daily lives, medical device developers and manufacturers should remain aware of the evolving changes to the legal and regulatory landscape.

4. Fifth Circuit Clarifies Toxic Tort Causation Standard in Two Deepwater Horizon BELO Litigation Cases

In 2025, the Fifth Circuit handed down a pair of decisions in the Deepwater Horizon BELO litigation, Ruffin v. BP and Williams v. BP, that reinforce a foundational principle in toxic tort law: plaintiffs must prove both general and specific causation, and each element is just as important as the other.36 Ruffin also offered a slightly new approach to the quantitative threshold-dose analysis that practitioners are likely used to implementing.

In Ruffin, the plaintiff alleged that chemical exposure from his job cleaning up after the Deepwater Horizon oil spill caused his prostate cancer.37 Although the Fifth Circuit affirmed the district court’s determination that the plaintiff’s expert opinion was inadmissible, the court rejected the defense’s argument that the expert had to identify a quantifiable level of exposure to a particular chemical necessary to cause the plaintiff’s prostate cancer.38 Instead, the court stated that defining a specific threshold dose of exposure is sufficient to make general causation testimony admissible, but it is not necessary.39 Experts can alternatively provide evidence of a “harmful level” of exposure using qualitative evidence such as epidemiological studies or well-established associations between occupational exposure and the condition at issue.40

The court ultimately determined that the plaintiff’s expert failed to establish general causation because they provided no evidence that the plaintiff’s exposure included any specific chemical capable of causing the specific injury alleged, prostate cancer.41 Without admissible testimony on general causation, the plaintiff’s case was doomed, regardless of the specific causation testimony.

The Ruffin court also addressed the “threshold dose” quantification. Traditionally, courts have required plaintiffs to demonstrate a “threshold dose” — that is, to identify a quantifiable level of exposure to a particular chemical that is necessary to cause the harm alleged.42 In Ruffin, however, the Fifth Circuit divorced the general causation analysis from specific causation and articulated a different approach to the general causation requirement.

A few months later, Williams v. BP, decided by the same Fifth Circuit panel, clarified that a defect in specific causation is equally fatal to a claim. The plaintiff in Williams similarly worked cleaning up after the oil spill, which he argued caused his chronic pansinusitis.43 Unlike in Ruffin, the Williams court addressed only specific causation.44 It excluded both of the plaintiff’s experts’ specific causation opinions — one based on flawed methodology and the other on reliability concerns.45 Having excluded all specific causation opinions, the court “decline[d] to reach the question of general causation.”46

Taken together, Ruffin and Williams illustrate that general and specific causation are independent requirements in toxic tort cases, and a plaintiff cannot survive summary judgment without substantiating both. Further, practitioners should take note of the Fifth Circuit’s departure from the traditional “threshold dose” analysis as it applies to general causation, a point that lower courts are already picking up and that will likely continue to spread.47

In Closing

The year 2025 has seen some significant legal developments with potential impact for drug and device manufacturers. We expect 2026 to be a busy year, and drug and device companies should remain vigilant and nimble in adapting to these legal shifts. 

With that — cheers to a new year! May it bring interesting and positive legal developments to the world of drug and device law!

  1. https://www.law360.com/productliability/articles/2419227 
  2. Id. 
  3. Id. 
  4. Id. 
  5. EU Directive — https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202402853 
  6. Article 8(1); Article 9(1) and (2); Article 10; Article 11(1)(e); Article 18(1).
  7. Article 10(2). 
  8. Article 10(3) and Article 10(4). 
  9. Article 11. 
  10. Article 8(1)(a) and Article 8(1)(b). 
  11. Article 8(1)(c). 
  12. Article 4(1). 
  13. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202402853 (noting in paragraph 51 that “The possibility for economic operators to avoid liability by proving that the defectiveness came into being after they placed the product on the market or put it into service should be restricted when a product’s defectiveness consists in the lack of software updates or upgrades necessary to address cybersecurity vulnerabilities and maintain the safety of the product”). 
  14. Article 2(2). 
  15. https://cms-lawnow.com/en/ealerts/2025/11/transposition-time-update-on-the-eu-member-states-adoption-of-the-new-product-liability-directive (noting that the Netherlands, Sweden, Finland, Germany, Italy, Hungary, Denmark, Austria, Czech Republic, Romania and Slovenia have begun taking steps to transpose the directive, including by proposing implementation legislation in some). 
  16. The text of the RAD is available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2020.409.01.0001.01.ENG
  17. Okanyi, Zsolt, et al., “European Class Action Report 2025,” CMS, available at https://cms.law/en/media/international/files/cms-european-class-action-report-2025?v=3
  18. Katie Adams & Maya Sandalow, FDA Oversight: Understanding the Regulation of Health AI Tools (Bipartisan Policy Ctr. June 2025), https://bipartisanpolicy.org/issue-brief/fda-oversight-understanding-the-regulation-of-health-ai-tools/.
  19. 21 U.S.C. § 321(h).
  20. U.S. Food & Drug Admin., Software as a Medical Device (SaMD), Digital Health Center of Excellence, https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd; U.S. Food & Drug Admin., Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management & Marketing Submission Recommendations, Draft Guidance (Jan. 6, 2025), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing.
  21. See U.S. Food & Drug Admin., Generative Artificial Intelligence–Enabled Digital Mental Health Medical Devices (Nov. 6, 2025), https://www.fda.gov/media/189391/download.
  22. U.S. Food & Drug Admin., Marketing Submission Recommendations for Predetermined Change Control Plan for Artificial Intelligence / Machine Learning–Based Software as a Medical Device, Guidance (Feb. 15, 2021), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence.
  23. U.S. Food & Drug Admin., Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft Guidance, Jan. 2025), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing.
  24. Id.
  25. U.S. Food & Drug Admin., Cybersecurity for Medical Devices: Quality System Considerations and Content of Premarket Submissions (Guidance, Oct. 2, 2014), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions.
  26. Katie Adams & Maya Sandalow, FDA Oversight: Understanding the Regulation of Health AI Tools (Bipartisan Policy Ctr. June 2025), https://bipartisanpolicy.org/issue-brief/fda-oversight-understanding-the-regulation-of-health-ai-tools/.
  27. Nat’l Conf. of State Legislatures, Artificial Intelligence 2025 Legislation (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.
  28. U.S. Senate Committee on the Judiciary, Durbin, Hawley Introduce Bill Allowing Victims to Sue AI Companies (Oct. 16, 2025), https://www.judiciary.senate.gov/press/dem/releases/durbin-hawley-introduce-bill-allowing-victims-to-sue-ai-companies.
  29. Manatt, Phelps & Phillips, LLP, Manatt Health: Health AI Policy Tracker (Oct. 11, 2025), https://www.manatt.com/insights/newsletters/health-highlights/manatt-health-health-ai-policy-tracker.
  30. Ensuring a National Policy Framework for Artificial Intelligence, Exec. Order No. (Dec. 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
  31. Id.
  32. Id.
  33. Id.
  34. Federal Trade Commission (FTC), FTC Launches Inquiry Into AI Chatbots Acting as Companions (Sept. 11, 2025), https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions.
  35. U.S. Food & Drug Admin., Measuring and Evaluating Artificial Intelligence-Enabled Medical Device Performance in the Real World; Request for Public Comment (FDA-2025-N-4203, Request for Public Comment) (2025), https://www.fda.gov/medical-devices/digital-health-center-excellence/request-public-comment-measuring-and-evaluating-artificial-intelligence-enabled-medical-device.
  36. Ruffin v. BP Expl. & Prod., Inc., 137 F.4th 276 (5th Cir. 2025); Williams v. BP Expl. & Prod., Inc., 143 F.4th 593 (5th Cir. 2025).
  37. Ruffin, 137 F.4th at 278.
  38. Id. at 282–83.
  39. Id. at 282. 
  40. Id. at 283.
  41. Id.
  42. See, e.g., Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 199 (5th Cir. 1996) (describing that “[s]cientific knowledge of the harmful level of exposure to a chemical” is a “minimal fact[ ] necessary to sustain the plaintiffs’ burden in a toxic tort case.”). 
  43. Williams v. BP Expl. & Prod., Inc., 143 F.4th 593, 596 (5th Cir. 2025).
  44. Id. at 599–602. 
  45. Id. at 599–601. 
  46. Id. at 599. 
  47. See Perrotti v. Lockheed Martin Corp., 2025 WL 2554425, at *15 (M.D. Fla. Sept. 2, 2025) (“[Ruffin] teaches us not to confuse what is sufficient to prove causation — a precise quantitative dose — with what is required — some level of exposure that shows the substance is capable of causing the relevant harm.”).