Center for Assurance Research and Engineering

Winning the Race: America’s AI Action Plan, Part 2 and Part 3 — Article by Gary Rinkerman

INTRODUCTION

The following two-part discussion highlights two important developments in recent efforts to address the uses of Artificial Intelligence in creating fictitious and damaging content.  The topics are, respectively: (1) the creation and detection of false evidence in adjudicative and deliberative processes; and (2) the proliferation of injurious content on the Internet, assisted through the use of Artificial Intelligence.  The issues are grouped together because they share some common technological concerns – and they are noted together in the recent White House document, Winning the Race: AMERICA’S AI ACTION PLAN.  The style of treatment, as with other parts of this series, includes some historical context, meditation and (hopefully) constructive wandering, treatment of related issues, and a focus on the current and potential future state of pertinent law and processes.  In many ways, the style departs from the usual forms of legal discussion.  However, this series of discussions on Artificial Intelligence was conceived as an alternative to the proliferation of articles in more traditional formats.

The core conclusions with regard to the topics treated in the following discussions are: (1) we need a readily available, competitively checked deepfake and digital fraud detection resource for use in conjunction with evidentiary rules in their current or amended form; (2) the TAKE IT DOWN Act’s provisions, including takedown procedures for injurious visual content, should be expanded to expressly include audio content; and (3) the model provided by the TAKE IT DOWN Act’s expedited, bipartisan consideration should now be focused on the roles of Artificial Intelligence in creating potentially dangerous Large Language Model services or products, such as chatbots.  These points are placed in context in the relevant sections of the discussions, but other less prominent issues – such as the appropriateness of criminal sanctions provided by the TAKE IT DOWN Act and the deadline to institute notice and takedown procedures – are also considered.

AUTHOR

Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University School of Law in London, a member of George Mason University’s Center for Assurance Research and Engineering, and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience.  The views and information provided in this article are solely the work of the author and do not constitute legal advice.  They are not for attribution to any entity represented by the author or with which he is affiliated or is a member – including, e.g., the firm in which he is a member or any of its clients.  All Internet citations and links in this article were visited and validated on September 4, 2025.