INTRODUCTION
The following two-part discussion highlights two important developments in recent efforts to address the use of Artificial Intelligence to create fictitious and damaging content. The topics are, respectively, (1) the creation and detection of false evidence in adjudicative and deliberative processes, and (2) the proliferation of injurious content on the Internet, assisted by Artificial Intelligence. The issues are grouped together because they share common technological concerns, and they are noted together in the recent White House document, Winning the Race: AMERICA’S AI ACTION PLAN. The style of treatment, as with other parts of this series, includes some historical context, meditation and (hopefully) constructive wandering, treatment of related issues, and a focus on the current and potential future state of pertinent law and processes. In many ways, the style departs from the usual forms of legal discussion; however, this series of discussions on Artificial Intelligence was conceived as an alternative to the proliferation of articles in more traditional formats.
The core conclusions with regard to the topics treated in the following discussions are: (1) we need a readily available, competitively checked deepfake and digital fraud detection resource for use in conjunction with the evidentiary rules in their current or amended form; (2) the TAKE IT DOWN Act’s provisions, including its takedown procedures for injurious visual content, should be expanded to expressly include audio content; and (3) the model provided by the TAKE IT DOWN Act’s expedited, bipartisan consideration should now be focused on the roles of Artificial Intelligence in creating potentially dangerous Large Language Model services or products, such as chatbots. These points are placed in context in the relevant sections of the discussions, but other, less prominent issues, such as the appropriateness of the criminal sanctions provided by the TAKE IT DOWN Act and the deadline to institute notice and takedown procedures, are also considered.
AUTHOR

Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University School of Law in London, a member of George Mason University’s Center for Assurance Research and Engineering (“CARE”), and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience. For those interested in “digital archeology,” Professor Rinkerman also successfully argued one of the first cases in which copyright in object code was enforced, and he co-founded and served as Editor-in-Chief of Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. The views and information provided in this article are solely the work of the author and do not constitute legal advice. They are not for attribution to any entity represented by the author or with which he is affiliated, including, for example, the firm in which he is a member or any of its clients. The author thanks J.P. Auffret, Director of CARE, for his continuing support and for his expertise in the frontier areas of Artificial Intelligence. All Internet citations and links in this article were visited and validated on September 4, 2025.