Center for Assurance Research and Engineering

Stablecoins, Money-Market Funds and the S&L Crisis — Editorial by Thomas P. Vartanian

Join CARE Mailing List

Lightly regulated new financial instruments can have serious consequences if lawmakers and regulators aren’t careful.

Celebrations over the recent passage of the GENIUS Act should be muted. It institutionalized cryptocurrencies in the U.S. by creating the first comprehensive regulations for stablecoins. But all too often, when Washington lets new financial products onto the scene, they aren't adequately regulated and quickly gain an advantage over existing financial instruments that are more heavily policed.

Winning the Race: America’s AI Action Plan — Article by Gary Rinkerman

INTRODUCTION

On July 23, 2025, the Trump Administration published Winning the Race: AMERICA'S AI ACTION PLAN ("AI Action Plan").[1] Among the goals of the AI Action Plan is the elimination of inappropriate bias[2] and false information in the government's AI systems. The corruptions caused by inappropriate biasing, which may be introduced at any number of stages in what has been called "the AI pipeline," lead to systems and outputs that are unreliable and, in some instances, injurious.[3] A core concern regarding the outputs of such flawed systems is that they can contain AI hallucinations, a phenomenon in which a large language model (LLM), "often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."[4] The roots of this concern are diverse and run deep, but a conspicuous and widely publicized example of hallucinating AI is provided by Google's attempt to avoid bias and promote diversity in the outputs of its generative AI Gemini model. Google's tampering with historical truths was ostensibly well-intentioned and harmless, but it pointed out underlying AI-related concerns that have more serious and potentially injurious consequences. These consequences arise when a flawed AI system's output is relied on to make decisions that affect people's lives. The following text briefly recaps the Google Gemini matter and then discusses aspects of the AI Action Plan that are intended to identify and eliminate inappropriate bias and falsity in government AI systems.

*This article is the first in a series of discussions of Winning the Race: AMERICA’S AI ACTION PLAN, issued by the Trump Administration on July 23, 2025.

AUTHOR

Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University School of Law in London, a member of George Mason University’s Center For Assurance Research and Engineering, and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience. The views and information provided in this article are solely the work of the author and do not comprise legal advice.  They are not for attribution to any entity represented by the author or with which he is affiliated or a member.  All Internet citations and links in this article were visited and validated on July 27, 2025.

AI Proxy Wars: The Struggle For Control Of Corporate Adoption And Use Of Artificial Intelligence Technologies — Article by Gary Rinkerman

ABSTRACT

Attempts to exercise control over corporate artificial intelligence ("AI") adoption, development, implementation, and management have surfaced in the form of Executive Orders, legislation, rulemaking, union and guild agreements, and case law. However, shareholders in publicly traded corporations have also used the shareholder Proposal and Proxy voting process to introduce sweeping or focused efforts to force corporate management to address and disclose information on a variety of AI-related issues. These Proposals are often couched in language that references ethical and significant social policy issues arising in the course of the target company's AI-related business plans and activities. Of course, the determination of what is "ethical" or what constitutes a "significant social policy issue" can be quite subjective or driven by the advocate's ancillary political and economic goals. This article discusses three recent attempts to use the shareholder Proposal and Proxy voting process to impose AI-related "transparency" and control requirements on major users of the technology. The common thread that runs through each effort is the strategic use of shareholder initiatives to attempt to influence the policies and management approach to AI and AI-related issues, including, in some instances, specific security, privacy, copyright, and personality rights issues.

AUTHOR

Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University School of Law in London, a member of George Mason University's Center For Assurance Research and Engineering, and a Senior Fellow at George Mason University's Center for Excellence in Government Cybersecurity Risk Management and Resilience. The views and information provided in this article are solely the work of the author, do not comprise legal advice, and are not for attribution to any entity represented by the author or with which he is affiliated or a member.

Crypto’s $10 trillion runaway train potentially threatens the financial system — Editorial by Thomas P. Vartanian

By Thomas P. Vartanian
The Hill (thehill.com)

As Congress considers legislation on crypto stablecoins, the Securities and Exchange Commission on April 4 freed certain stablecoins from the burden of registering as securities if, among other things, they are fully backed by high-quality liquid assets such as Treasurys.

The October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: Is It Making Your Intellectual Property More Secure?, CARE Article by Gary Rinkerman

ABSTRACT

The recent Biden White House Executive Order on artificial intelligence is a sweeping attempt to assess, monitor, regulate, and direct developments in this important area of technological growth. However, while the Order contemplates massive and thorough (arguably intrusive) collections of information, including information that constitutes trade secrets or is otherwise commercially valuable, it does not specifically address how to better ensure that government officials, employees, agents, and contractors are properly trained so that third-party proprietary rights in that information are preserved and the information is not "leaked" or otherwise improperly published by those acting under color of federal authority. In addition, while the Order seeks information to better assess the refusal by the U.S. Copyright Office and the U.S. Patent Office to afford protection to matter created wholly by artificial intelligence, it lacks specific direction on the potential need to alter these positions or to focus on developing, at the federal or state level, new forms of intellectual property protection for such matter.

JANUARY 2024

AUTHOR

Gary Rinkerman is a partner at the law firm of FisherBroyles LLP, an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University in London, and a Senior Fellow at the Center for Assurance Research and Engineering (“CARE”) in the College of Engineering and Computing at George Mason University, Virginia. For those interested in “digital archeology,” Professor Rinkerman also successfully argued one of the first cases in which copyright in object code was enforced and he co-founded and served as Editor-in-Chief for Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this article are attributable solely to the author and do not necessarily reflect the views of any persons, organizations or entities with which he is affiliated or whom he represents. The author would also like to thank J.P. Auffret, Director of CARE, for his continuing support and for his expertise in the frontier areas of Artificial Intelligence.

The Great Debate: How to Modernize Financial Regulation and Create Economic Stability in a Digital Age

The regulation of financial services in the U.S. is based on a bank-centric model built for the conditions that existed 90 years ago, after the Great Depression. Today's financial services world is far more diversified, and a smaller share of it is regulated. As a result, we continue to see events that seriously destabilize our financial system.

As the Administration considers the future of financial services regulation, it is time to evaluate how we can create a financial regulatory system that (i) reduces financial stability risks ignored by asymmetrical oversight, (ii) confronts known structural risks created by the digital economy, and (iii) deploys predictive artificial intelligence to deal with threats before they spiral out of control.

Intellectual Property Aspects of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — Article by Gary Rinkerman

ABSTRACT

JANUARY 2024

The proliferation of AI tools in the arts, commercial design industries, and other endeavors has raised core questions regarding who or what actually supplied the alleged creative or inventive elements, if any, to an AI system's output. In both U.S. copyright and patent law, the question turns on a case-by-case analysis of how much of the final product evidences human "authorship" or invention. Creativity, as well as infringement, can also be located in various phases of an AI system's creation, ingestion of training materials, management, and operation, including its output, whether affected before or after the output is generated. Issues such as liability for selecting ingestion materials or target data, as well as the potential inadvertent triggering of patent law's bar date through use of specific AI systems, have also come to the forefront of AI's potential to secure, forfeit, or impact claimed proprietary rights in AI-assisted creative and inventive activities. Several alternative intellectual property and unfair competition approaches that can supplement or supplant copyright and patent law principles also come into play as users of AI seek to protect the products of their efforts.

AUTHOR

Gary Rinkerman is a partner at the law firm of FisherBroyles LLP, an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University in London, and a Senior Fellow at the Center for Assurance Research and Engineering (“CARE”) in the College of Engineering and Computing at George Mason University, Virginia. For those interested in “digital archeology,” Professor Rinkerman also successfully argued one of the first cases in which copyright in object code was enforced and he co-founded and served as Editor-in-Chief for Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this article are attributable solely to the author and do not necessarily reflect the views of any persons, organizations or entities with which he is affiliated or whom he represents. The author would also like to thank J.P. Auffret, Director of CARE, for his continuing support and for his expertise in the frontier areas of Artificial Intelligence.

Analyser son marché pour dénicher les opportunités d'Affaires (Analyzing Your Market to Uncover Business Opportunities)

Serge Adouaka, CARE, and the U.S. Embassy Bangui, CAR hosted 25 to 30 technology entrepreneurs on February 20, 2025, as the concluding event of CARE's Central African Republic (CAR) Technology Entrepreneurship and ICT Leadership grant. The topic of the forum was identifying market opportunities and fostering a technology entrepreneur network in CAR.
Ministère des PME et de la Promotion du Secteur Privé-RCA — in Bangui, Central African Republic.

Technology and Education

Technology experts emphasize the ethical and policy challenges of integrating technology into education, advocating for curricula that prepare students for a digital future while addressing issues of equity, access, and digital literacy. Concerns include the ethical use of AI-driven learning tools, data privacy in student tracking systems, and the digital divide that can exacerbate educational inequalities. Ensuring that students receive relevant, up-to-date instruction on technology—covering topics like cybersecurity, AI bias, and responsible AI use—is crucial for fostering informed and critical users of emerging technologies. Policymakers and educators need to develop policies that promote ethical technology use, ensure teacher training keeps pace with advancements, and incorporate diverse perspectives to create inclusive, future-ready learning environments.

Future Workforce

Ethical and forward-thinking policies need to address the impact of automation and AI on the future workforce. Concerns include job displacement, skill gaps, and the need for continuous workforce re-skilling to keep pace with technological advancements. Ensuring that AI-driven automation enhances rather than replaces human labor requires policies that promote equitable access to retraining programs and foster collaboration between industry, government, and educational institutions. Experts warn against unchecked technological adoption that prioritizes efficiency over worker well-being, advocating instead for policies that protect labor rights, prevent bias in AI-driven hiring and performance assessments, and create new opportunities for meaningful employment. A proactive, ethical approach to workforce transformation is essential to ensuring that technology benefits society as a whole rather than deepening economic inequalities.