ABSTRACT
Attempts to exercise control over corporate artificial intelligence (“AI”) adoption, development, implementation, and management have surfaced in the form of Executive Orders, legislation, rulemaking, union and guild agreements, and case law. However, shareholders in publicly traded corporations have also used the shareholder Proposal and Proxy voting process to introduce sweeping or narrowly focused efforts to force corporate management to address and disclose information on a variety of AI-related issues. These Proposals are often couched in language that references ethical and significant social policy issues that arise in the course of the target company’s AI-related business plans and activities. Of course, the determination of what is “ethical” or what constitutes a “significant social policy issue” can be quite subjective or driven by the advocate’s ancillary political and economic goals. This article discusses three recent attempts to use the shareholder Proposal and Proxy voting process to impose AI-related “transparency” and control requirements on major users of the technology. The common thread that runs through each effort is the strategic use of shareholder initiatives to influence corporate policies and management approaches to AI and AI-related issues, including, in some instances, specific security, privacy, copyright, and personality rights issues.
AUTHOR
Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University of London School of Law, a member of George Mason University’s Center for Assurance Research and Engineering, and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience. The views and information provided in this article are solely the work of the author, do not constitute legal advice, and are not for attribution to any entity represented by the author or with which he is affiliated or of which he is a member.