Generative AI and the future of work in America, McKinsey Global Institute Report, July 26, 2023
Principles for the Development, Deployment, and Use of Generative AI Technologies, ACM Technology Policy Council (June 27, 2023)
Generative Artificial Intelligence (AI) is a broad term used to describe computing techniques and tools that can be used to create new content such as text, speech and audio, images and video, and computer code. While such systems offer tremendous opportunities for benefits to society, they also pose very significant risks. The increasing power of generative AI systems, the speed of their evolution, the breadth of their application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.
CARE Race to Regulate the Internet program was held on May 8
Should States or the Federal Government Set the Rules for Website Content, Child Protection, and Personal Data Control?
See headlines and details at the CARE Race to Regulate the Internet update.
Statement in Support of Mandatory Comprehensive Digital Accessibility Regulations
US Technology Policy Committee (May 31, 2024)
Content Provenance refers to the facts about the history of a piece of digital content (image, video, audio recording, document).
Technical specifications were released in 2022 by the Coalition for Content Provenance and Authenticity (C2PA Content Credentials).
Data provenance refers to a documented trail that accounts for the origin of a piece of data and its movement from its source to its present location. The purpose of data provenance is to tell developers the origin of the data, the changes made to it, and the details supporting confidence in its validity. Provenance requires that data creators be transparent about their work -- where the data came from and the chain of custody through which it can be tracked as it is used and adapted for new purposes.
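As a rough illustration of the concept (not the C2PA Content Credentials format, whose actual manifest structure is defined in the C2PA specification), a provenance trail can be sketched as an append-only list of entries, each recording who acted on the data, what they did, when, and a hash of the resulting bytes so that later consumers can verify the recorded history against the asset they hold. All names here are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceEntry:
    """One link in a data asset's documented history."""
    actor: str         # who performed the action
    action: str        # e.g. "created", "cropped", "transcoded"
    timestamp: str     # when the action occurred (UTC, ISO 8601)
    content_hash: str  # SHA-256 of the asset after this action

@dataclass
class ProvenanceRecord:
    """An append-only trail of actions taken on one asset."""
    asset_id: str
    entries: list = field(default_factory=list)

    def append(self, actor: str, action: str, content: bytes) -> None:
        # Hash the current state of the asset so a downstream consumer
        # can check that the recorded history matches the bytes it holds.
        digest = hashlib.sha256(content).hexdigest()
        self.entries.append(ProvenanceEntry(
            actor=actor,
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
            content_hash=digest,
        ))

    def to_json(self) -> str:
        # Serialize the full trail for storage or transmission
        # alongside the asset.
        return json.dumps(asdict(self), indent=2)

record = ProvenanceRecord(asset_id="photo-001")
record.append("camera-app", "created", b"raw image bytes")
record.append("editor", "cropped", b"edited image bytes")
```

A real scheme such as C2PA additionally cryptographically signs each entry so the trail itself cannot be silently rewritten; this sketch records the chain but does not authenticate it.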
Statement On Mass Cybersecurity Incidents Likely to Recur, US Technology Policy Committee (August 11, 2024)
August 2024
We are writing to provide an update on some significant developments since our Race to Regulate the Internet program held on May 8 – video available here: https://youtu.be/5YBTmqFN7To
For most of the life of the internet, States, for a variety of reasons including early unfavorable court decisions, have generally not been inclined to attempt to impose requirements or restrictions on private party website content. In recent years, two large States, Florida and Texas, motivated by a perception that some large websites were discriminating against the presentation of conservative viewpoints, enacted laws designed to place requirements or restrictions on website content moderation, intended to balance the range of views that are presented.
The Eleventh Circuit Court of Appeals affirmed a district court injunction against the Florida law, finding that the law was not likely to survive a review under the First Amendment. In contrast, the Fifth Circuit Court of Appeals reversed a district court injunction against the Texas law, based upon its determination that the law did not regulate any speech and thus did not implicate the First Amendment.
The Supreme Court’s review of these two laws on appeal was much anticipated. In a July 1 ruling, Justice Kagan, joined by Chief Justice Roberts and Justices Sotomayor, Kavanaugh, Barrett, and Jackson, held that the trade association plaintiffs (NetChoice and the Computer & Communications Industry Association (CCIA)) had challenged the laws on their face (as opposed to as applied) and thus had the very high burden of showing that a substantial number of a law’s applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep. The Court stated that the parties and the courts below had limited their analysis to a relatively narrow scope of website activities rather than address the full range of activities the laws covered and measure the constitutional versus unconstitutional applications of the law. Thus, the Court held that the parties had not briefed the critical issues, and the record was underdeveloped. As a result, the Court vacated the lower court decisions and remanded the cases for further consideration.
The Court went on to provide guidance for this further consideration. It indicated that it wanted to avoid having the Fifth Circuit conduct further proceedings on the assumption that its First Amendment analysis was correct, because the Court found that the Fifth Circuit’s conclusions involved a serious misunderstanding of the Court’s First Amendment precedent. The Court stated that the Fifth Circuit was wrong in concluding that the restrictions placed on a website’s selection and presentation of content do not interfere with expression. Furthermore, it observed that the Fifth Circuit was wrong to treat as valid Texas’ interest in changing the content on websites’ feeds in order to better balance the marketplace of ideas. The Court further observed that the Eleventh Circuit saw the First Amendment issues in the case much as the Court does.
Justice Alito, joined by Justices Thomas and Gorsuch, wrote that the only binding holding in the decisions was that the plaintiffs have yet to prove that the laws they challenged are facially unconstitutional, with which they agreed. He went on to take issue with much of the majority’s analysis.
We were fortunate to have representatives of CCIA (Stephanie Joyce) and NetChoice (Carl Szabo) join us as panelists at our May 8 program.
Moody v. NetChoice, 22-277_d18f.pdf (supremecourt.gov)
On June 26, the Supreme Court reversed a Fifth Circuit ruling granting a preliminary injunction to two States (Missouri and Louisiana) and five individuals based on allegations that Executive Branch agencies and officials had pressured certain websites to suppress protected speech in regard to COVID-19 (White House, CDC, Surgeon General) and the 2020 Election (FBI and CISA) in violation of the First Amendment. The injunction provided that the defendants and their employees shall not coerce or significantly encourage social media companies to remove, suppress, or reduce posted content containing protected free speech.
Justice Barrett wrote the Court’s opinion. She reviewed the allegations of the individual plaintiffs that the actions of the government had caused platforms to censor their content at the behest of the government and that this would continue to occur. The individual plaintiffs also argued that they had suffered injury to their right to listen to others. The States asserted a sovereign interest in hearing from their citizens on social media. In all instances the Court found that the plaintiffs had failed to demonstrate that they had standing to pursue their claims and reversed the ruling of the Fifth Circuit.
Justice Alito, joined by Justices Thomas and Gorsuch, dissented. He wrote that one of the individual plaintiffs had shown that Facebook’s censorship of her content resulted at least in part from the White House’s prompting of Facebook to amend its censorship policies, and therefore had standing. Justice Alito asserted that the government officials with potent authority had communications with Facebook that were virtual demands and that Facebook’s response showed that it felt a strong need to yield. As a result he concluded that the individual plaintiff was likely to prevail on her claim that the White House coerced Facebook into censoring her speech.
Murthy v. Missouri, 23-411_3dq3.pdf (supremecourt.gov)
As part of a report, issued on July 23, on the DOJ’s efforts to counter foreign interference in U.S. elections through information sharing inside and outside of the government, the OIG touched on some of the issues presented in the Murthy case.
The OIG found that the DOJ did not have a comprehensive strategy guiding its approach to engagement with social media companies in regard to foreign malign influence threats (FMITs) to U.S. elections, which the OIG believes creates a risk to the DOJ. The OIG observed that the FBI must be mindful that its interactions with social media companies could be perceived as coercion or significant encouragement aimed at convincing social media companies to limit or exclude speech posted by their users, which may implicate First Amendment protections, noting the Fifth Circuit’s opinion in Murthy.
During the course of its review the OIG recommended that the DOJ develop an approach to inform the public of its procedures for transmitting notices of FMITs to social media companies that is protective of First Amendment rights. The DOJ indicated that during the course of the lower court proceedings in Murthy, in October 2023 it began developing a standardized approach for sharing FMITs with social media companies that appropriately accounts for First Amendment concerns.
This process led to the issuance of a standard operating procedure (SOP) that went into effect in February 2024. The DOJ stated that the SOP reflects the principle that it is permissible to share FMITs with social media companies, as long as it is clear that ultimately it is up to the company whether to take any action, including removing content or barring users based on such information.
OIG Press release DOJ OIG Releases Report of Investigation Regarding Alleged Unauthorized Contacts by FBI Employees with the Media and Other Persons in Advance of the 2016 Election (justice.gov), OIG Report 24-080.pdf (justice.gov)
To date, complaints about website censorship have come largely from conservatives, inspiring, among other things, the Texas and Florida laws at issue in Moody v. NetChoice. In a change of pace, on July 23, Representative Jerrold Nadler, a Democrat from New York who is the Ranking Member on the House Judiciary Committee, wrote to Committee Chairman Jim Jordan regarding alleged censorship of Vice President Kamala Harris’ campaign handle on X.
Representative Nadler stated that numerous users reported that over the past two days when they tried to follow @KamalaHQ they received a “Limit Reached” message stating that the user is “unable to follow more people at this time.” He said that the messages do not make any sense as these users are otherwise free to follow other accounts. He stated that “[t]his suggests that X may be intentionally throttling or blocking Vice President Harris’ ability to communicate with potential voters. If true, such action would amount to egregious censorship based on political and viewpoint discrimination – issues this Committee clearly has taken very seriously.” Representative Nadler requested that the Committee immediately launch an investigation and request specified information from X.
Representative Nadler’s Letter 2024-07-23_jn_to_jdj.pdf (house.gov)
Protection of Children on the Internet
Recently many states have enacted laws to either require age verification for access to adult content websites or parental consent for children’s access to social media sites. New York now joins California in taking a different approach to seeking to protect children online.
In 2022 California enacted the California Age-Appropriate Design Code Act, which requires businesses whose online products or services are likely to be accessed by children to include or exclude certain design features in order to protect those children. On September 18, 2023, a Federal District Court in California issued a preliminary injunction against the law, finding that it likely violated the First Amendment. The Ninth Circuit heard oral argument on California’s appeal on July 17.
California Age-Appropriate Design Act Bill Text – AB-2273 The California Age-Appropriate Design Code Act., Preliminary injunction against The California Age-Appropriate Design Code Act NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf
On June 20, 2024, New York Governor Kathy Hochul signed two pieces of legislation intended to protect children online. She said “[y]oung people across the nation are facing a mental health crisis fueled by addictive social media feeds – and New York is leading the way with a new model for addressing the crisis and protecting our kids.” She further observed that “[b]y reining in addictive feeds and shielding kids’ personal data, we’ll provide a safer digital environment, give parents more peace of mind, and create a brighter future for young people across New York.”
The Stop Addictive Feeds Exploitation (SAFE) for Kids Act (SAFE Act) targets “addictive feeds”. It prohibits users under 18 from viewing addictive feeds on social media platforms without parental consent. It also prohibits platforms from sending notifications to minors between 12:00 a.m. and 6:00 a.m.
The New York Child Data Protection Act prohibits online sites from collecting, using, sharing or selling personal data of anyone under the age of 18, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, informed consent must be provided by a parent.
SAFE Act S7694A (nysenate.gov), Child Data Protection Act s7695b (nysenate.gov)
We were fortunate to have Utah State Senator Michael McKell, the sponsor of Utah’s social media parental consent law, join us as a panelist at our May 8 program.
On July 22, the Kids Online Health and Safety Task Force, comprised of representatives of the White House, the Department of Health and Human Services, the Department of Commerce, the Department of Education, the Federal Trade Commission, the Department of Homeland Security, and the DOJ, issued a document titled Best Practices for Families and Guidance for Industry.
The report identifies key risks and benefits of online platforms and digital technologies to young people’s health, safety and privacy. It provides best practices for parents and recommended practices for companies. Recommended practices include:
The report describes a series of initiatives that various Federal agencies are undertaking in support of kids online safety. The report also calls on Congress to enact legislation to protect youth online. It states that such legislation should include prohibiting platforms from collecting personal data from youth, banning targeted advertising, and implementing measures to keep children safe from those who would use online platforms to harm, harass, and exploit them.
White House Kids Safety Task Force press release Kids Online Health and Safety Task Force Announces Recommendations and Best Practices for Safe Internet Use | HHS.gov
On August 2, the DOJ filed suit against TikTok and affiliated entities alleging that over the past five years TikTok knowingly permitted children under 13 to create regular TikTok accounts while collecting and retaining personal information from the children without notifying or obtaining consent from their parents. The DOJ further alleged that even as to accounts intended for children under 13, TikTok unlawfully collected and retained certain personal information. Moreover, the DOJ alleged that when parents learned of their children’s accounts and requested TikTok to delete the accounts and related information, TikTok frequently failed to do so. The DOJ’s suit notes that the alleged actions occurred despite the companies being subject to a court order prohibiting them from violating COPPA and imposing measures to ensure compliance with the law. The suit seeks civil penalties and injunctive relief.
The DOJ’s press release asserted that TikTok’s COPPA violations have resulted in millions of children under 13 using the regular TikTok app, thereby subjecting them to extensive data collection and allowing them to interact with adult users and to access adult content. It stated that the Department is deeply concerned that TikTok has continued to collect and retain children’s personal information despite a court order barring such conduct.
DOJ press release Office of Public Affairs | Justice Department Sues TikTok and Parent Company ByteDance for Widespread Violations of Children’s Privacy Laws | United States Department of Justice, DOJ complaint dl (justice.gov)
Personal Data Control
On July 31, the Texas Attorney General’s Office announced that it had entered into a settlement agreement with Meta to stop the company’s capture and use of the personal biometric data of millions of Texans without the authorization required by Texas law. The Texas AG stated that this was the first lawsuit and settlement brought under the Texas Capture or Use of Biometric Identifier Act (CUBI).
According to the AG, the suit involved a feature introduced in 2011 that made it easier for users to tag photos with the names of people in the photos. The AG said that Meta automatically turned this feature on without explaining how it worked, and that for more than a decade it ran facial recognition software on faces uploaded to Facebook, capturing the facial geometry of those faces. The AG alleged that Meta did this despite knowing that CUBI forbids companies from capturing such biometric identifiers of Texans unless the company first informs the person and receives their consent to do so.
Under the settlement, Meta will pay Texas $1.4 billion over five years, which the AG described as the largest privacy settlement an Attorney General has ever obtained.
Texas AG press release Attorney General Ken Paxton Secures $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data In Largest Settlement Ever Obtained From An Action Brought By A Single State | Office of the Attorney General (texasattorneygeneral.gov), Agreed Final Judgment Final State of Texas v Meta Order 2024.pdf (texasattorneygeneral.gov), Texas AG’s February 2022 suit against Meta State of Texas v. Meta Platforms Inc..pdf (texasattorneygeneral.gov)
The effectiveness of mechanisms and metrics implemented to promote trust in AI must be empirically evaluated to determine whether they actually work. Distrust of AI implicates trustworthiness and calls for a deeper understanding of stakeholder perceptions, concerns, and fears associated with AI and its specific applications. Fostering public trust in AI will require that policymakers demonstrate how they are holding industry accountable to the public and its legitimate concerns.
The recent Biden White House Executive Order on artificial intelligence is a sweeping attempt to assess, monitor, regulate, and direct developments in this important area of technological growth. However, while the Order contemplates massive and thorough (arguably intrusive) collections of information, including information that will be trade secret and otherwise commercially valuable, it does not specifically address the issue of how better to ensure that government officials, employees, agents, and contractors have proper training to make sure that third-party proprietary rights in that information are preserved and the information is not “leaked” or otherwise improperly published by those acting under color of federal authority. In addition, while the Order seeks information to better assess the refusal by the U.S. Copyright Office and the U.S. Patent Office to afford protection to matter created wholly by artificial intelligence, there is a lack of specific direction on the potential need to alter these positions or focus on developing – at the federal or state levels – new forms of intellectual property protection for such matter.
Gary Rinkerman is an attorney whose practice includes intellectual property litigation, transactions, and counseling. He is an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University in London, UK and also a Senior Fellow at the Center for Assurance Research and Engineering (‘CARE’) in the College of Engineering and Computing at George Mason University in Virginia. For those interested in ‘digital archeology,’ Mr. Rinkerman, as a Senior Investigative Attorney for the U.S. International Trade Commission, successfully argued one of the first cases in which copyright in object code was enforced. He also co-founded and served as Editor-in-Chief for Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this discussion are attributable solely to the author and do not necessarily reflect the views of any firms, persons, organizations or entities with which he is affiliated or whom he represents.
The “Global CIO Insights: Digital Transformation with AI” digital conference was hosted by Global CIO of Tashkent, Uzbekistan. Dr. J.P. Auffret was part of the discussion on “AI Implementation: Value for Business”.
For more information on the conference, click here