October 17, 2023

AI in Legal Practice

The Product of AI

Explores whether AI models are legally products or ideas, focusing on liability and complex legal implications.

Amy Swaner

As AI becomes more pervasive, it is only natural in our litigious society that lawsuits will begin to define liability for activities involving AI. A good place to start is with the question: “Is an AI model a product or just an idea?” The answer to this simple question is extremely complex. AI models are designed, developed, tested, and maintained just like other software products, and they can be licensed and monetized. These factors make an AI model appear to be a product. However, AI models lack a clear physical or digital manifestation, and they have no intrinsic value until they are trained on a great deal of data, fine-tuned, and tested.


The case of Rodgers v. Christie, No. 19-2616 (3d Cir. Mar. 6, 2020) (unpublished)[1], is a good example of why this question is important, and it offers some insight into why the answer is so complex. June Rodgers sued, among others, the Laura and John Arnold Foundation (LAJAF), developer of an AI model called the Public Safety Assessment (PSA), after the model was used in a decision to release a man from prison prior to his trial.[2] Days after being released, the accused man murdered Christian Rodgers, son of the plaintiff June Rodgers.

Ms. Rodgers sued under the New Jersey Products Liability Act, alleging that the multifactor risk-estimation model (the AI model) the state relied on in deciding whether to release an alleged wrongdoer prior to trial was a “defective product.” Because the New Jersey Products Liability Act does not define “product,” the court turned to the Third Restatement of Torts for a definition. And because the Restatement defines a product as tangible personal property, and an AI model is not tangible personal property, the district court dismissed the case.

The district court reasoned that New Jersey’s products liability act applies only to defective products, and that the model was not a product but merely an idea. The court declined to extend product liability to everything that causes harm or fails to achieve its purpose. Further, the court held that extending strict liability to “ideas” would raise serious First Amendment concerns. On appeal, the Third Circuit Court of Appeals affirmed the dismissal. But was this sound reasoning by the district court and the court of appeals?

Setting aside the First Amendment issue, the logical extension of this holding is that the developers had created an “idea,” not a product. Was the LAJAF really only selling an idea? Another area of law where the distinction between product and idea matters is federal patent law. Although you cannot patent an idea under federal patent law, you can patent a process, defined as a combination of steps or methods. See 35 U.S.C. § 101 (https://www.uspto.gov/sites/default/files/101_step1_refresher.pdf). Based on this definition, algorithms that involve a process should be patentable if they meet the remaining criteria for patentability. In Diamond v. Chakrabarty, the Supreme Court held that Congress intended patentable subject matter to “include anything under the sun that is made by man.” See Diamond v. Chakrabarty, 447 U.S. 303 (1980). Under this reading, an AI model that comprises a process should be patentable as a product.

At what point would the AI model involved in Rodgers v. Christie become more than a mere idea? At what point could it be considered a product, and therefore patentable, and likewise a product to which the New Jersey products liability act would apply? Were the courts convinced that the New Jersey Assessment Board had really purchased only the use of an “idea”? The Assessment Board fed relevant data into an AI model and then used the resulting output to guide its decision. Perhaps the AI model was not appropriately trained, or perhaps it needed to be fine-tuned. But it is undisputed that the Assessment Board relied on the guidance of the LAJAF’s AI model. Surely the LAJAF should not be able to duck liability if its AI model was defective.

It should be noted that, as an unpublished opinion, Rodgers v. Christie has no precedential value; not even New Jersey courts are required to follow its ruling. The case therefore sheds very little light on the important issue of potential products liability for AI models.

Do you agree with the court that the model was not a “product”? Do you believe AI model creators and developers should bear liability for the ways their products are used?


[1] https://casetext.com/case/rodgers-v-christie#N196646

[2] The factors used in the PSA can be found at: https://www.njcourts.gov/sites/default/files/psariskfactor.pdf
