November 19, 2018

Under the Wire: A Brief Sketch of a Theory for Defending Private Figure Libel Suits in an Artificial Intelligence World

By: Michael A. Giudicessi and Leita Walker[1]

 

As science fiction writers confronted how life, or death, might look in the hands of artificial intelligence, they perhaps painted no better picture than in 2001: A Space Odyssey[2] when the HAL 9000 computer defiantly says, “I’m sorry, Dave, I’m afraid I can’t do that” and proceeds to cause the death of four crew members.[3]

 

Today, as technology catches up with science fiction from 50 years ago, lawyers find themselves considering how to prosecute or defend crimes and tort claims arising from conduct, errors, or omissions created, directed, or assisted by computers. 

 

Thus, current legal writings contemplate ways to respond to lawsuits stemming from self-driving car mishaps[4] or how to define what is a “reasonable algorithm.”[5]

 

First Amendment lawyers likewise await lawsuits where the common law of libel—and, in some cases, the constitutional actual malice standard of New York Times Co. v. Sullivan[6]—will apply to publication of information generated in whole or in part without the personal touch, or subjective assessment, of reporters and editors.

 

Already, leading news organizations rely on algorithms to gather and report the news, especially in the areas of sports and finance, which (unlike certain investigative, enterprise, or human-interest stories) offer structured data from which artificial intelligence (“AI”) can easily build a narrative.[7] And outside of the United States, plaintiffs have met with some success in bringing defamation claims over AI-generated “speech”—including, for example, against Google and its Autocomplete search feature.[8]
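
For readers curious what "building a narrative from structured data" looks like in practice, the sketch below shows, in deliberately simplified form, how a template-driven system might turn a box score into a sentence of copy. It is a minimal illustration of the general technique, not any newsroom's actual code; the field names and the word-choice threshold are invented for the example.

# A minimal, hypothetical sketch of template-driven "robot journalism":
# structured game data in, a sentence of publishable copy out. Field names
# and the "close game" margin threshold are invented for illustration.

def write_game_recap(game: dict) -> str:
    margin = game["home_score"] - game["away_score"]
    winner, loser = (
        (game["home_team"], game["away_team"]) if margin > 0
        else (game["away_team"], game["home_team"])
    )
    verb = "edged" if abs(margin) <= 3 else "beat"  # word choice keyed to the data
    return (f"{winner} {verb} {loser} "
            f"{max(game['home_score'], game['away_score'])}-"
            f"{min(game['home_score'], game['away_score'])} on {game['date']}.")

print(write_game_recap({
    "home_team": "Hawkeyes", "away_team": "Wildcats",
    "home_score": 24, "away_score": 21, "date": "November 17, 2018",
}))
# -> "Hawkeyes edged Wildcats 24-21 on November 17, 2018."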

 

Through journal articles with provocative titles such as “Employing Robot Journalists: Legal Implications, Considerations and Recommendations,”[9] “Libel by Algorithm? Automated Journalism and the Threat of Legal Liability,”[10] and “Injury by Algorithm: A Look into Google’s Liability for Defamatory Autocompleted Search Suggestions,”[11] scholars already are shaping the thinking about how principles of libel law will apply in cases based on computer-created content.

 

Other scholars have concluded—we think rightly—that public officials/figures will face great difficulty winning libel suits involving AI-generated speech, given the high standard of fault the First Amendment requires such plaintiffs to prove.[12] But what about private figures, who (depending on the jurisdiction) need prove only negligence or breach of a professional standard of care? This article proposes that even in a private figure case, publishers who use algorithms—though perhaps not the developers who create them—can rely on the well-established wire-service defense to avoid liability.

 

Actual malice? Only if it’s a “fake news” algorithm

 

Beginning in 1964, with Times v. Sullivan, the Supreme Court has placed a high burden on public officials and public figures in libel cases and in certain false light privacy cases,[13] requiring them to prove, by clear and convincing evidence, that the defendant published the allegedly defamatory statements with “actual malice”—i.e., either knowing they were false or with reckless disregard for the truth.[14] “Reckless disregard” has been held to mean a “high degree of awareness of probable falsity.”[15]

 

The actual malice test is highly subjective, looking at the defendant’s state of mind toward the facts at the time of publication. “[R]eckless conduct is not measured by whether a reasonably prudent man would have published, or would have investigated before publishing. There must be sufficient evidence to permit the conclusion that the defendant in fact entertained serious doubts as to the truth of his publication.”[16]

 

Meanwhile, algorithms such as Google’s Autocomplete search feature—which helps the user refine her search by making ever-more specific suggestions based on each additional keystroke—are based on highly variable factors: the user’s own search history, language, and geographic location; the popularity of particular search terms; and “freshness” factors that pick up on current events and trending topics.[17]
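
To see why such output is difficult to tie to anyone’s state of mind, consider the toy sketch below, which combines signals of the kind just described (popularity, freshness, and the user’s own history) into a single ranking score. The signal names and weights are invented for illustration; Google’s actual system is proprietary and far more complex.

# A toy, hypothetical ranking of autocomplete suggestions. The weights and
# signal names are invented; they stand in for the kinds of variable inputs
# (popularity, freshness, personal history) described above.

def rank_suggestions(prefix, candidates, user_history):
    def score(c):
        popularity = c["query_count"]                         # how often others search it
        freshness = 2.0 if c["trending"] else 1.0             # boost for current events
        personal = 1.5 if c["text"] in user_history else 1.0  # the user's own past searches
        return popularity * freshness * personal

    matches = [c for c in candidates if c["text"].startswith(prefix)]
    return [c["text"] for c in sorted(matches, key=score, reverse=True)]

candidates = [
    {"text": "john doe lawsuit", "query_count": 900, "trending": True},
    {"text": "john doe lawyer", "query_count": 1200, "trending": False},
    {"text": "john doe la fitness", "query_count": 300, "trending": False},
]
print(rank_suggestions("john doe la", candidates, user_history={"john doe la fitness"}))
# The same prefix can surface different (and differently damaging) suggestions
# depending on what is trending and on who is typing.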

 

Thus, unless the dawn of algorithm-driven news reporting precipitates some major change in the law, a public official/figure would have to prove either (1) that the defendant knew the algorithm would generate a particular phrase in a news report and that the phrase would be false, or (2) that although the defendant did not intentionally build the algorithm to lie, he acted recklessly because he ignored a “high degree of awareness” that future events and/or machine learning might affect the output and result in probable falsity.

 

Except in the case of an algorithm “intentionally programmed to develop and produce false content”[18]—i.e., an actual “fake news” algorithm—or where the publisher deployed the algorithm while purposefully avoiding the truth,[19] this burden seems all but insurmountable. Clearing that hurdle seems improbable if the defendant is a third-party programmer, who literally would have to be able to see into the future to understand how AI might misread yet-to-be-created data to generate falsehoods. Overcoming it seems even more remote if the defendant is a shoe-leather journalist who knows the AP Stylebook by heart but thinks C++ is just an ironic way of saying “extra average.”[20] The curmudgeonly, analog reporter or editor cannot possibly be charged with knowledge that some black-box algorithm he doesn’t understand (does he even know it exists?) will generate bad information.

 

Private figures and the wire service defense

 

The U.S. Supreme Court has declined to impose the burden of proving actual malice on private figure plaintiffs, largely leaving issues of fault to the discretion of states.[21]

 

The states, in turn, have mostly opted to impose a lower standard of fault on private figure plaintiffs, such as negligence or breach of the professional standard of care exercised by similarly situated journalists.

 

Importantly, this lower standard is not subjective—it does not look at the journalist’s state of mind at the time of publication. Rather, it is objective—it looks at what a reasonable person would have done. Thus, sub-par programming, or even the failure to have a human fact-check and otherwise review algorithm-generated news reports—if human review is industry standard—could result in liability.
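
If human review of machine-written copy were the industry standard, the negligence inquiry would focus on whether the publisher’s workflow actually included such a checkpoint. The sketch below is a hypothetical illustration of that kind of gate; the function names and the review step are assumptions made for the example, not a description of any publisher’s system.

# A minimal, hypothetical "human in the loop" gate for machine-written copy.
# Nothing here is any publisher's real workflow; it only illustrates the kind
# of checkpoint an objective negligence standard might expect to see.

from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    body: str
    machine_written: bool
    human_reviewed: bool = False

def approve(story: Story) -> Story:
    """A human editor signs off after fact-checking the copy."""
    story.human_reviewed = True
    return story

def publish(story: Story) -> str:
    # Machine-written copy does not go out without a human sign-off.
    if story.machine_written and not story.human_reviewed:
        raise RuntimeError(f"{story.slug}: algorithm-generated copy requires human review")
    return f"PUBLISHED: {story.slug}"

draft = Story(slug="q3-earnings-recap", body="...", machine_written=True)
print(publish(approve(draft)))  # fine: reviewed before publication
# publish(Story("box-score", "...", machine_written=True))  # would raise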

 

Cue the wire-service defense, which holds that “[a] local media organization is not responsible for defamation if it republishes a news release from a reputable news agency without substantial change and without actually knowing that the article is false.”[22] As the leading treatise on defamation law explains, “the doctrine finds footing in the principle that liability for such rebroadcast would be liability without fault contrary to Gertz.”[23] In states accepting the defense, publishers that take news reports from reputable wire services/newspapers at face value can be relieved of the need to conduct an independent investigation.

 

Presumably, those states accepting the defense would extend it only to publications that use “reputable” algorithms—a test that could be satisfied by looking at the algorithm’s developer (Google? The Associated Press? Someone in North Korea?), by assessing its track record for generating accurate news reports, or by both. But assuming an algorithm meets the reputable and reliable requirements, there is no obvious legal basis for distinguishing between a machine-produced story and a human-produced story. Indeed, given the news ecosystem, it seems entirely possible that the overnight editor who needs content for page 5A will not even know who (or what?) wrote the story he pulls from the wire.

 

Conclusion

 

In the end, as First Amendment protection butts up against looming technology and as editors and publishers are forced to defend content selections made by artificial intelligence, perhaps another observation of the HAL 9000 provides the proper context:

 

Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.

 

But temper that advice with the knowledge that the HAL 9000 supplied that recommendation shortly after killing Dave’s four crewmates.



[1] The authors practice First Amendment law at Faegre Baker Daniels LLP.  The opinions expressed here are theirs alone, not those of any human or machine colleague.

[2] According to IMDb.com, 2001: A Space Odyssey, directed by Stanley Kubrick and written by Kubrick and Arthur C. Clarke, was released in 1968 and ranks 89th on its list of the top 250 movies as rated by users. See https://www.imdb.com/title/tt0062622/?ref_=nv_sr_1 (last accessed September 11, 2018).

[3] By its own account, the HAL 9000 computer that took over the Discovery One spacecraft “became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992,” where it was programmed “by a Mr. Langley.” Of course, the HAL 9000 depended on a Hollywood voiceover to speak, and how that came to pass is detailed in The New York Times obituary for Canadian actor Douglas Rain, who died on November 11, 2018, at age 90. See https://www.nytimes.com/2018/11/12/obituaries/douglas-rain-dead.html?rref=collection%2Fsectioncollection%2Fobituaries&action=click&contentCollection=obituaries&region=rank&module=package&version=highlights&contentPlacement=2&pgtype=sectionfront (last accessed November 13, 2018). That obituary further reported, “The American Film Institute once listed the 50 greatest movie villains. HAL came in at No. 13.”

[4] See “5 Defenses for Autonomous Vehicles Litigation,” available at https://www.faegredrinker.com/en/insights/publications/2018/9/5-defenses-for-autonomous-vehicles-litigation (last accessed September 3, 2018).

[5] Karni Chagal-Feferkorn, “The Reasonable Algorithm,” 2018 U. Ill. J.L. Tech. & Pol'y 111, 148 (2018).

[6] New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

[7] Lewis, S.C., Sanders, A.K. and Carmody, C., “Libel by Algorithm? Automated Journalism and the Threat of Legal Liability,” Journalism & Mass Communication Quarterly at 3 (2018), available at http://jonathanstray.com/papers/Lewis%20-%20Libel%20by%20Algorithm.pdf.

[8] See Seema Ghatnekar, “Injury by Algorithm: A Look into Google’s Liability for Defamatory Autocompleted Search Suggestions,” 33 Loy. L.A. Ent. L. Rev. 171, 182 (2013).

[9] Ombelet, P.J., Kuczerawy, A. and Valcke, P., “Employing Robot Journalists: Legal Implications, Considerations and Recommendations,” Proceedings of the 25th International Conference Companion on World Wide Web at 731–36 (April 2016).

[10] Lewis, et al., supra note 7.

[11] Ghatnekar, supra note 8.

[12] Lewis, et al., supra note 7, at 9–10 (“Even if the content an algorithm produces is false, a public figure plaintiff would likely struggle to prove actual malice under the Court’s current standard because algorithms, in and of themselves, do not engage in … subjective decision-making processes.”).

[13] See Time, Inc. v. Hill, 385 U.S. 374 (1967).

[14] New York Times, 376 U.S. at 279–80.

[15] Garrison v. Louisiana, 379 U.S. 64, 74 (1964).

[16] St. Amant v. Thompson, 390 U.S. 727, 731 (1968).

[17] See Ghatnekar, supra note 8, at 180–81.

[18] Lewis, et al., supra note 7, at 9.

[19] See Harte-Hanks Communications, Inc. v. Connaughton, 491 U.S. 657 (1989).

[20] C++ is a programming language.

[21] Gertz v. Robert Welch, Inc., 418 U.S. 323, 347 (1974) (“So long as they do not impose liability without fault, the States may define for themselves the appropriate standard of liability for a publisher or broadcaster of defamatory falsehood injurious to a private individual.”).

[22] Hon. Robert D. Sack, Sack on Defamation: Libel, Slander, and Related Problems § 7:3.3 (5th ed., Apr. 2018 Supp.).

[23] Id.; see also supra note 21.

 
