Categories
News

The Race to Regulate the Internet — August Alert (Significant Developments)

Join CARE Mailing List

The Race to Regulate the Internet: Should States or the Federal Government Set the Rules for Website Content, Child Protection and Personal Data Control?

August 2024

We are writing to provide an update on some significant developments since our Race to Regulate the Internet program held on May 8 – video available here:  https://youtu.be/5YBTmqFN7To

Headlines 

  1. Supreme Court returns challenge to State content moderation laws to lower courts for further development
  2. Supreme Court dismisses injunction against certain federal government contacts with social media firms based on lack of standing by plaintiffs
  3. Department of Justice Office of Inspector General reviews FBI contacts with social media firms regarding foreign threats to U.S. elections in relation to First Amendment considerations
  4. Congressman calls for investigation of alleged censorship by X of Vice President Harris’ campaign
  5. New York enacts legislation to protect children online
  6. White House Task Force on Kids Online Safety makes recommendations for parents and social media companies
  7. Department of Justice lawsuit alleges TikTok collected data on children under 13 in violation of the Children’s Online Privacy Protection Act
  8. Meta to pay $1.4 billion to settle claims regarding unauthorized capture of personal biometric data

 

State or Federal Government Regulation of, or Influence on, Website Content 

 

1. Supreme Court sends Florida and Texas website content restriction cases back for further development, but indicates that State efforts to bring balance to website editorial content decisions are not consistent with First Amendment principles

 

For most of the life of the internet, States, for a variety of reasons including early unfavorable court decisions, have generally not been inclined to attempt to impose requirements or restrictions on private party website content.  In recent years, two large States – Florida and Texas, motivated by a perception that some large websites were discriminating against the presentation of conservative viewpoints – enacted laws designed to place requirements or restrictions on website content moderation intended to balance the range of views that are presented.

 

The Eleventh Circuit Court of Appeals affirmed a district court injunction against the Florida law, finding that the law was not likely to survive a review under the First Amendment.  In contrast, the Fifth Circuit Court of Appeals reversed a district court injunction against the Texas law, based upon its determination that the law did not regulate any speech and thus did not implicate the First Amendment.

 

The Supreme Court’s review of these two laws on appeal was much anticipated.  In a July 1 ruling, Justice Kagan, joined by Chief Justice Roberts and Justices Sotomayor, Kavanaugh, Barrett and Jackson, ruled that the trade association plaintiffs (NetChoice and the Computer & Communications Industry Association (CCIA)) had challenged the laws on their face (as compared to as applied) and thus bore the very high burden of showing that a substantial number of each law’s applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.  The Court stated that the parties and the courts below had limited their analysis to a relatively narrow scope of website activities rather than address the full range of activities the laws covered and measure the constitutional versus unconstitutional applications of the laws.  Thus, the Court held that the parties had not briefed the critical issues, and the record was underdeveloped.  As a result, the Court vacated the lower court decisions and remanded the cases for further consideration.

 

The Court went on to provide guidance for this further consideration.  It indicated that it wanted to avoid having the Fifth Circuit conduct further proceedings on the basis that its First Amendment analysis was correct, where the Court found that the Fifth Circuit’s conclusions involved a serious misunderstanding of the Court’s First Amendment precedent.  The Court stated that the Fifth Circuit was wrong in concluding that the restrictions placed on a website’s selection and presentation of content do not interfere with expression.  Furthermore, it observed that the Fifth Circuit was wrong to treat as valid Texas’ interest in changing the content on the website’s feeds, in order to better balance the marketplace of ideas.  The Court further observed that the Eleventh Circuit saw the First Amendment issues in the case much as the Court does.

 

Justice Alito, joined by Justices Thomas and Gorsuch, wrote that the only binding holding in the decisions was that the plaintiffs have yet to prove that the laws they challenged are facially unconstitutional, with which they agreed.  He went on to take issue with much of the majority’s analysis.

 

We were fortunate to have representatives of CCIA (Stephanie Joyce) and NetChoice (Carl Szabo) join us as panelists at our May 8 program.

 

Moody v. NetChoice, 22-277_d18f.pdf (supremecourt.gov)

 

2. Supreme Court turns away an effort to restrict Federal government communications with websites allegedly aimed at coercing the websites to suppress content disfavored by the government based on lack of standing by plaintiffs

 

On June 26, the Supreme Court reversed a Fifth Circuit ruling granting a preliminary injunction to two States (Missouri and Louisiana) and five individuals based on allegations that Executive Branch agencies and officials had pressured certain websites to suppress protected speech in regard to COVID-19 (White House, CDC, Surgeon General) and the 2020 Election (FBI and CISA) in violation of the First Amendment.  The injunction provided that the defendants and their employees shall not coerce or significantly encourage social media companies to remove, suppress, or reduce posted content containing protected free speech.  

 

Justice Barrett wrote the Court’s opinion.  She reviewed the allegations of the individual plaintiffs that the actions of the government had caused platforms to censor their content at the behest of the government and that this would continue to occur.  The individual plaintiffs also argued that they had suffered injury to their right to listen to others.  The States asserted a sovereign interest in hearing from their citizens on social media.  In all instances the Court found that the plaintiffs had failed to demonstrate that they had standing to pursue their claims and reversed the ruling of the Fifth Circuit.

 

Justice Alito, joined by Justices Thomas and Gorsuch, dissented.  He wrote that one of the individual plaintiffs had shown that Facebook’s censorship of her content resulted at least in part from the White House’s prompting of Facebook to amend its censorship policies, and therefore had standing.  Justice Alito asserted that the government officials with potent authority had communications with Facebook that were virtual demands and that Facebook’s response showed that it felt a strong need to yield.  As a result he concluded that the individual plaintiff was likely to prevail on her claim that the White House coerced Facebook into censoring her speech.

 

Murthy v. Missouri, 23-411_3dq3.pdf (supremecourt.gov)

 

3. Department of Justice (DOJ) Office of Inspector General (OIG) Report on DOJ’s Sharing of Information About Foreign Malign Influence Threats to U.S. Elections

 

As part of a July 23 report on the DOJ’s efforts to counter foreign interference in U.S. elections through information sharing inside and outside the government, the OIG touched on some of the issues presented in the Murthy case.

 

The OIG found that the DOJ did not have a comprehensive strategy guiding its approach to engagement with social media companies in regard to foreign malign influence threats (FMITs) to U.S. elections, which the OIG believes creates a risk to the DOJ.  The OIG observed that the FBI must be mindful that its interactions with social media companies could be perceived as coercion or significant encouragement aimed at convincing social media companies to limit or exclude speech posted by its users, which may implicate First Amendment protections, noting the Fifth Circuit’s opinion in Murthy.

 

During the course of its review, the OIG recommended that the DOJ develop an approach to inform the public of its procedures for transmitting notices of FMITs to social media companies that is protective of First Amendment rights.  The DOJ indicated that in October 2023, during the lower court proceedings in Murthy, it began developing a standardized approach for sharing FMITs with social media companies that appropriately accounts for First Amendment concerns.

 

This process led to the issuance of a standard operating procedure (SOP) that went into effect in February 2024.  The DOJ stated that the SOP reflects the principle that it is permissible to share FMITs with social media companies, as long as it is clear that ultimately it is up to the company whether to take any action, including removing content or barring users based on such information.

 

OIG Press release   DOJ OIG Releases Report of Investigation Regarding Alleged Unauthorized Contacts by FBI Employees with the Media and Other Persons in Advance of the 2016 Election (justice.gov), OIG Report  24-080.pdf (justice.gov)

 

4. Representative Nadler calls for Congressional investigation of alleged censorship by X related to Vice President Harris’s campaign for President

 

To date, complaints about website censorship have come largely from conservatives, inspiring, among other things, the Texas and Florida laws at issue in Moody v. NetChoice.  In a change of pace, on July 23, Representative Jerrold Nadler, a Democrat from New York who is the Ranking Member on the House Judiciary Committee, wrote to Committee Chairman Jim Jordan regarding alleged censorship of Vice President Kamala Harris’ campaign handle on X.

 

Representative Nadler stated that numerous users reported that over the past two days when they tried to follow @KamalaHQ they received a “Limit Reached” message stating that the user is “unable to follow more people at this time.”  He said that the messages do not make any sense as these users are otherwise free to follow other accounts.  He stated that “[t]his suggests that X may be intentionally throttling or blocking Vice President Harris’ ability to communicate with potential voters.  If true, such action would amount to egregious censorship based on political and viewpoint discrimination – issues this Committee clearly has taken very seriously.”  Representative Nadler requested that the Committee immediately launch an investigation and request specified information from X.

 

Representative Nadler’s Letter  2024-07-23_jn_to_jdj.pdf (house.gov)

 

Protection of Children on the Internet

 

5. New York enacts legislation to protect children online

 

Recently many states have enacted laws to either require age verification for access to adult content websites or parental consent for children’s access to social media sites.  New York now joins California in taking a different approach to seeking to protect children online.

 

In 2022 California enacted the California Age-Appropriate Design Code Act, which requires businesses whose online products or services are likely to be accessed by children to include or exclude certain design features in order to protect those children.  On September 18, 2023, a Federal District Court in California issued a preliminary injunction against the law, finding that it was likely to violate the First Amendment.  The Ninth Circuit heard oral argument on California’s appeal on July 17.

 

California Age-Appropriate Design Code Act  Bill Text – AB-2273 The California Age-Appropriate Design Code Act., Preliminary injunction against The California Age-Appropriate Design Code Act  NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf

 

On June 20, 2024, New York Governor Kathy Hochul signed two pieces of legislation intended to protect children online.  She said “[y]oung people across the nation are facing a mental health crisis fueled by addictive social media feeds – and New York is leading the way with a new model for addressing the crisis and protecting our kids.”  She further observed that “[b]y reining in addictive feeds and shielding kids’ personal data, we’ll provide a safer digital environment, give parents more peace of mind, and create a brighter future for young people across New York.”

 

The Stop Addictive Feeds Exploitation (SAFE) for Kids Act (SAFE Act) targets “addictive feeds”.  It prohibits social media platforms from providing addictive feeds to users under 18 without parental consent.  It also prohibits platforms from sending notifications to minors from 12:00 a.m. to 6:00 a.m.

 

The New York Child Data Protection Act prohibits online sites from collecting, using, sharing or selling personal data of anyone under the age of 18, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website.  For users under 13, informed consent must be provided by a parent.

 

SAFE Act  S7694A (nysenate.gov), Child Data Protection Act  s7695b (nysenate.gov)

 

We were fortunate to have Utah State Senator Michael McKell, the sponsor of Utah’s social media parental consent law, join us as a panelist at our May 8 program.

 

6. Kids Online Health and Safety Task Force announces recommendations and best practices for safe internet use

 

On July 22, the Kids Online Health and Safety Task Force, comprising representatives of the White House, the Department of Health and Human Services, the Department of Commerce, the Department of Education, the Federal Trade Commission, the Department of Homeland Security and the DOJ, issued a document titled Best Practices for Families and Guidance for Industry.

 

The report identifies key risks and benefits of online platforms and digital technologies to young people’s health, safety and privacy.  It provides best practices for parents and recommended practices for companies.  Recommended practices include:

• Designing age-appropriate experiences for youth users.
• Making privacy protections for youth the default.
• Reducing and removing features that encourage excessive or problematic use by youth.
• Providing age-appropriate parental control tools that are easy to understand and use.

A number of State children’s online protection laws include requirements for parental consent for social media use.  The report instead appears to focus on parental controls within a particular online platform, noting that they can help parents exercise more control over their children’s online experience, but cautioning that parental controls may be invasive to young people’s privacy, and commenting that a one-size-fits-all approach may not be appropriate for many families.

 

The report describes a series of initiatives that various Federal agencies are undertaking in support of kids online safety.  The report also calls on Congress to enact legislation to protect youth online.  It states that such legislation should include prohibiting platforms from collecting personal data from youth, banning targeted advertising, and implementing measures to keep children safe from those who would use online platforms to harm, harass, and exploit them.

 

White House Kids Safety Task Force press release  Kids Online Health and Safety Task Force Announces Recommendations and Best Practices for Safe Internet Use | HHS.gov

 

7. DOJ sues TikTok for alleged collection of data on children under 13 in violation of the Children’s Online Privacy Protection Act (COPPA)

 

On August 2, the DOJ filed suit against TikTok and affiliated entities alleging that over the past five years TikTok knowingly permitted children under 13 to create regular TikTok accounts while collecting and retaining personal information from the children without notifying or obtaining consent from their parents.  The DOJ further alleged that even as to accounts intended for children under 13, TikTok unlawfully collected and retained certain personal information.  Moreover, the DOJ alleged that when parents learned of their children’s accounts and requested that TikTok delete the accounts and related information, TikTok frequently failed to do so.  The DOJ’s suit notes that the alleged actions occurred despite the companies being subject to a court order prohibiting them from violating COPPA and imposing measures to ensure compliance with the law.  The suit seeks civil penalties and injunctive relief.

 

The DOJ’s press release asserted that TikTok’s COPPA violations have resulted in millions of children under 13 using the regular TikTok app, thereby subjecting them to extensive data collection and allowing them to interact with adult users and to access adult content.  It stated that the Department is deeply concerned that TikTok has continued to collect and retain children’s personal information despite a court order barring such conduct.

 

DOJ press release  Office of Public Affairs | Justice Department Sues TikTok and Parent Company ByteDance for Widespread Violations of Children’s Privacy Laws | United States Department of Justice, DOJ complaint  dl (justice.gov)

 

Personal Data Control

 

8. Meta agrees to pay $1.4 billion to settle a suit by the Texas Attorney General regarding allegations that Meta was unlawfully capturing biometric data of users

 

On July 31, the Texas Attorney General’s Office announced that it had entered into a settlement agreement with Meta to stop the company’s capture and use of the personal biometric data of millions of Texans without the authorization required by Texas law.  The Texas AG stated that this was the first lawsuit and settlement that had been brought under the Texas Capture or Use of Biometric Identifier Act (CUBI).

 

According to the AG, the suit involved a feature introduced in 2011 that made it easier for users to tag photos with the names of people in the photos.  The AG said that Meta automatically turned this feature on without explaining how it worked, and that for more than a decade it ran facial recognition software on faces uploaded to Facebook, capturing the facial geometry of those faces.  The AG alleged that Meta did this despite knowing that CUBI forbids companies from capturing such biometric identifiers of Texans unless the company first informs the person and receives their consent to do so.

Under the settlement, Meta will pay Texas $1.4 billion over five years, which the AG described as the largest privacy settlement an Attorney General has ever obtained. 

 

Texas AG press release Attorney General Ken Paxton Secures $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data In Largest Settlement Ever Obtained From An Action Brought By A Single State | Office of the Attorney General (texasattorneygeneral.gov), Agreed Final Judgment Final State of Texas v Meta Order 2024.pdf (texasattorneygeneral.gov), Texas AG’s February 2022 suit against Meta  State of Texas v. Meta Platforms Inc..pdf (texasattorneygeneral.gov)

 

 

• Jean-Pierre Auffret, George Mason University’s Director, Research Partnerships, School of Business; Director, Center for Assurance Research and Engineering (CARE), College of Engineering & Computing, jauffret@gmu.edu

 

• Thomas P. Vartanian, Executive Director of the Financial Technology & Cybersecurity Center, Author, The Unhackable Internet: How Rebuilding Cyberspace Can Create Real Security and Prevent Financial Collapse, tvartanian@fintsc.org

 

• Robert H. Ledig, Managing Director of the Financial Technology & Cybersecurity Center, rledig@fintsc.org

The October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: Is It Making Your Intellectual Property More Secure?


ABSTRACT

The recent Biden White House Executive Order on artificial intelligence is a sweeping attempt to assess, monitor, regulate, and direct developments in this important area of technological growth.  However, while the Order contemplates massive and thorough (arguably intrusive) collections of information, including information that will be trade secret and otherwise commercially valuable, it does not specifically address the issue of how better to ensure that government officials, employees, agents, and contractors have proper training to make sure that third-party proprietary rights in that information are preserved and the information is not “leaked” or otherwise improperly published by those acting under color of federal authority.  In addition, while the Order seeks information to better assess the refusal by the U.S. Copyright Office and the U.S. Patent Office to afford protection to matter created wholly by artificial intelligence, there is a lack of specific direction on the potential need to alter these positions or focus on developing – at the federal or state levels – new forms of intellectual property protection for such matter.

AUTHOR

Gary Rinkerman is an attorney whose practice includes intellectual property litigation, transactions, and counseling. He  is an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University in London, UK and also a Senior Fellow at the Center for Assurance Research and Engineering (‘CARE’) in the College of Engineering and Computing at George Mason University in Virginia. For those interested in ‘digital archeology,’ Mr. Rinkerman, as a Senior Investigative Attorney for the U.S. International Trade Commission, successfully argued one of the first cases in which copyright in object code was enforced. He also co-founded and served as Editor-in-Chief for Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this discussion are attributable solely to the author and do not necessarily reflect the views of any firms, persons, organizations or entities with which he is affiliated or whom he represents.


J.P. Auffret joins Global CIO Insights conference as speaker on plenary panel on AI Implementation: Value for Business


 

The “Global CIO Insights: Digital Transformation with AI” digital conference was hosted by Global CIO of Tashkent, Uzbekistan.  Dr. J.P. Auffret was part of the discussion on “AI Implementation: Value for Business”.

For more information on the conference, click here


Selected Intellectual Property Aspects of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence


ABSTRACT

The proliferation of AI tools in the arts, commercial design industries, and other endeavors has raised core questions regarding who or what actually supplied the alleged creative or inventive elements, if any, to the AI system’s output. In both U.S. copyright and patent law the question focuses on a case-by-case analysis as to how much of the final product evidences human “authorship” or invention. Also, creativity, as well as infringement, can be located in various phases of the AI system’s creation, ingestion of training materials, management, and operation – including its output, whether effected prior to the output or after it. Issues such as liability for selecting ingestion materials or target data, as well as the potential inadvertent triggering of patent law’s bar date through use of specific AI systems, have also come to the forefront of AI’s potential to secure, forfeit, or impact claimed proprietary rights in AI-assisted creative and inventive activities. Several alternative intellectual property and unfair competition approaches that can supplement or supplant copyright and patent law principles also come into play as users of AI seek to protect the products of their efforts.

AUTHOR

Gary Rinkerman is a partner at the law firm of FisherBroyles LLP, an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University in London, and a Senior Fellow at the Center for Assurance Research and Engineering (“CARE”) in the College of Engineering and Computing at George Mason University, Virginia. For those interested in “digital archeology,” Professor Rinkerman also successfully argued one of the first cases in which copyright in object code was enforced and he co-founded and served as Editor-in-Chief for Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this article are attributable solely to the author and do not necessarily reflect the views of any persons, organizations or entities with which he is affiliated or whom he represents. The author would also like to thank J.P. Auffret, Director of CARE, for his continuing support and for his expertise in the frontier areas of Artificial Intelligence.


Supreme Court Case: Andy Warhol Foundation vs Goldsmith


 


On May 18, 2023, the U.S. Supreme Court issued its opinion in Andy Warhol Foundation For The Visual Arts, Inc. v. Goldsmith. The case is very significant because it helps to define the scope and proper application of  “the fair use doctrine” in U.S. copyright law.   The opinion has also stirred a lot of debate about its potential applicability to AI systems, especially their training sets and outputs.  For example, in a recent panel discussion hosted by the U.S. Copyright Office, a number of participants asserted that the Warhol case mandates that the unauthorized ingestion of third-party copyrighted materials will not fall under the copyright law’s fair use doctrine.  However, the debate on this issue is robust and ongoing.

The link below is to an article in which Gary Rinkerman, a CARE Senior Fellow, is quoted regarding the general impact of the Warhol case.  Gary is also part of CARE’s study of AI systems’ Terms of Use and security implications, and he has provided “best practices” to companies dealing with AI, alone and in combination with open source ingestion materials.  

Did the Supreme Court’s Warhol Decision Further Complicate Copyright Law? Experts Weigh in on the Ruling’s Ramifications


Virginia State and Local Government Cybersecurity Partnering Workshop


Virginia State & Local Government Cybersecurity Partnering Workshop
Join us for the Mason – NSF Virginia state and local government cybersecurity partnering workshop at the Virginia War Memorial in Richmond.

The workshop is a follow-on to the Virginia workshops held in Richmond on July 26, 2022 and in the fall of 2017, and to subsequent regional workshops.

Topics and discussion will include: ransomware update, cyber insurance, Commonwealth of Virginia update, CISA update, election security, K – 12, regional SOCs and new DHS and CISA funding opportunities amongst others.

No cost to register / attend – Lunch provided.

Who should attend: State and Local Government Administrators, IT and Cybersecurity Administrators, K-12 IT and Cybersecurity Administrators and Police and Emergency IT Managers

For more information: jauffret@gmu.edu

Hosts and Organizers: George Mason University and the National Science Foundation.

The workshop is part of the George Mason-National Science Foundation Cybersecurity City and County Cross Jurisdictional Collaboration project, having the goal of furthering U.S. city and county cybersecurity efforts by developing foundations and policies that enable and foster city and county cybersecurity partnerships.


West Virginia Local and State Government Cybersecurity Partnering Workshop


West Virginia Local and State Government Cybersecurity Partnering Workshop

Thursday, July 20th, 2023
9:30 a.m. – 4:00 p.m.
Days Inn Conference Center
350 Days Drive
Sutton, West Virginia 26601

Please join us for the West Virginia Local and State Government Cybersecurity Partnering Workshop

When: Thursday, July 20, 2023, 9:30am – 4:00 pm, Days Inn Conference Center, Sutton, WV

Who should attend: State and local government administrators, state and local IT and cybersecurity managers and staff, government election officials, County Clerks, election experts, police and emergency IT managers and SCADA systems experts (water, electricity, etc.)

Hosted by: West Virginia Office of Technology, West Virginia Secretary of State CIO’s Office, George Mason University


Artificial Intelligence (AI) and the New CIO Leadership Conference


From Generative AI Innovation to Policy and Economic Competitiveness with Industry Focus on Government, Medicine, Financial Services and Education

Monday, May 15th and Tuesday, May 16th, 2023
9:30 a.m. – 4:30 p.m. (both days)
(Registration opens at 9:00 a.m.)
George Mason Arlington Campus, 3351 Fairfax Drive, Arlington, Virginia

Join us for the George Mason University AI and CIO Leadership Conference, where industry leaders and experts will come together to explore the latest developments and opportunities in the rapidly evolving fields of Artificial Intelligence (AI) and Computational Intelligence (CI).

Don’t miss this exciting opportunity to learn from industry leaders and experts, network with peers and gain valuable insights into the worlds of AI and CI. (Credit: ChatGPT)

In partnership with: George Mason Center for Assurance Research & Engineering and School of Business, International Academy of CIO, MS Program in Biomedical Science Policy and Advocacy and MS Program in Biohazardous Threat Agents & Emerging Infectious Diseases, Georgetown School of Medicine and the Emerging Technology Working Group.

Topics

  • Next Generation Technology – Generative AI and the Quantum Algorithms for AI
  • AI and Education
  • The Cybersecurity Challenge of AI
  • AI and Policy and Regulation
  • AI and Medicine
  • AI and Financial Services
  • Current Legal Cases, Intellectual Property, Ethics and AI
  • AI Governance
  • AI and Economic Competitiveness

Speakers

  • Shaukat Ali Khan, Global CIO, Aga Khan University and Hospitals in Asia, Africa and United Kingdom
  • Tom Anderson, Managing Partner, DataStrategi
  • Chayakrit Asvathitanont, Ph.D., Dean, College of Innovation, Thammasat University, Bangkok, THAILAND
  • J.P. Auffret, Ph.D., Director, Center for Assurance Research & Engineering, George Mason University
  • Andrew Beklemishev, Vice President, Commonwealth of Independent States Region, IDC, KAZAKHSTAN
  • Asif Kaba, Global Director Risk & Cyber Security Practice, TATA Consultancy Services (TCS)
  • Bob Ledig, Managing Director, Financial Technology & Cybersecurity Center
  • Ryan Leirvik, CEO, Neuvik Solutions
  • Eric Luellen, Ph.D. Chief Technology and Innovation Officer, Stealth Mode Startup
  • Jeff Matsuura, Of Counsel, Author, Alliance Law Group
  • Larry Medsker, Ph.D., Research Professor, Human – Technology Collaboration, George Washington University
  • Bob Osgood, Director, Digital Forensics, George Mason University
  • Eric Osterweil, Ph.D., Associate Director, Center for Assurance Research & Engineering, George Mason University
  • Oleg Petrov, Senior Digital Development Specialist, World Bank
  • Mark Rasch, Of Counsel, Kohrman, Jackson and Krantz
  • Abhishek Ray, Ph.D., Assistant Professor, Information Systems and Operations Management, School of Business, George Mason University
  • Anthony Rhem, Ph.D., CEO / Principal Consultant, A.J. Rhem & Associates
  • Gary Rinkerman, Partner, FisherBroyles LLP
  • Alexander Ryzhov, Ph.D., Professor, Head of Chair, Russia Presidential Academy of Public Administration (RANEPA)
  • Geoffrey Siwo, Ph.D., Research Assistant Professor, Department of Learning Health Sciences, University of Michigan Medical School
  • Angelos Stavrou, Ph.D., Professor, Electrical and Computer Engineering, Virginia Tech
  • Tomoko Steen, Ph.D., Director, Biomedical Science Policy and Advocacy Program, Georgetown University
  • Frank Strickland, Partner, aiLeaders
  • Jim Sullivan, Partner, Healthcare – Consulting & Services Integration, TATA Consultancy Services (TCS)
  • Jirapon Sunkpho, Ph.D., Vice Rector, Information Technology, Thammasat University, Bangkok, THAILAND
  • Kennedy Ukelegharanya, M.S., J.D. 2023, Wilson Sonsini Goodrich & Rosati
  • Tom Vartanian, Executive Director, Financial Technology & Cybersecurity Center
  • Lin Wells II, Executive Advisor, C4I and Cyber Center, George Mason University; former CIO, U.S. Department of Defense
  • Kevin Yin, Founder and CEO, SitScape

Agenda

Day 1 – May 15th
  • 9:30 a.m. Welcome, Introductions and Agenda Overview
  • 9:40 a.m. Conference Opening
    – Mark Rasch, Of Counsel, Kohrman, Jackson and Krantz
  • 10:10 a.m. AI and the New CIO
    – Frank Strickland, Partner, aiLeaders
  • 10:40 a.m.BREAK
  • 11:00 a.m. AI and Leadership Panel Discussion
    – Frank Strickland, Partner, aiLeaders
    – Anthony Rhem, Ph.D., CEO, Rhem & Associates
  • 11:30 a.m. Andrew Beklemishev, Vice President, CIS, IDC
  • 12:00 noon LUNCH
  • 1:00 p.m. Current Legal Cases in AI
    – Jeff Matsuura, Of Counsel, Alliance Law
  • 1:30 p.m. AI and Data Analytics
    – Kevin Yin, Founder and CEO, SitScape
  • 1:55 p.m. BREAK   
  • 2:15 p.m. AI and Medicine
    – Chair: Tomoko Steen, Ph.D., Director, Biomedical Science Policy and Advocacy Program; Professor of Microbiology and Immunology, Georgetown University Medical Center
    – Eric Luellen, Ph.D., Chief Technology and Innovation Officer, Stealth Mode Startup
    – Geoffrey Siwo, Ph.D., Research Assistant Professor, Department of Learning Health Sciences, University of Michigan Medical School
    – Kennedy Ukelegharanya, M.S., J.D. 2023, Wilson Sonsini Goodrich & Rosati
    – J.P. Auffret, Ph.D., Director, Center for Assurance Research & Engineering, George Mason University
  • 4:00 p.m. AI and Cybersecurity
    – Eric Osterweil, Ph.D., Assistant Professor, Computer Science, George Mason University
    – Ryan Leirvik, CEO, Neuvik Solutions
  • 4:55 p.m. CLOSING
Day 2 – May 16th
  • 9:30 a.m. WELCOME
  • 9:30 a.m. Day 2 Opening Talk
    – Shaukat Ali Khan, Global CIO, Aga Khan University and Hospitals in Asia, Africa and United Kingdom
  • 9:55 a.m. AI and Medical Devices and Cybersecurity
    – James Sullivan, Partner, Life Sciences / Healthcare – Consulting Services Integration, Tata Consultancy Services
    – Asif Kaba, Director, Risk & Cyber Strategy Consulting, Tata Consultancy Services
  • 10:50 a.m. BREAK  
  • 11:00 a.m. AI and Law Enforcement
    – Bob Osgood, Director, Digital Forensics, George Mason University
  • 11:25 a.m. Oleg Petrov, Senior Digital Development Specialist, World Bank
  • 11:50 a.m. AI and Financial Services
    – Tom Vartanian, Executive Director, Financial Technology & Cybersecurity Center and Bob Ledig, Managing Director, Financial Technology & Cybersecurity Center
  • 12:50 p.m. LUNCH  
  • 1:45 p.m. AI and Education
    – Jirapon Sunkpho, Vice Rector for Information Technology, Thammasat University, THAILAND
    – Alexander Ryzhov, Professor, Department Head, School of IT Management, RANEPA, RUSSIA
  • 2:40 p.m. AI and Criminal Networks
    – Abhishek Ray, Ph.D., Assistant Professor, Information Systems and Operations Management, George Mason University
  • 3:05 p.m.BREAK
  • 3:10 p.m. AI and Ethics
    – Larry Medsker, Ph.D., Research Professor, Human – Technology Collaboration Lab, George Washington University; Founding Co-editor, AI and Ethics Journal (Springer)
    – Anthony Rhem, Ph.D., CEO, Rhem & Associates
  • 4:00 p.m. AI and Intellectual Property
    – Gary Rinkerman, Partner, FisherBroyles
  • 4:25 p.m. AI and Quantum Computing
    – J.P. Auffret, Ph.D., George Mason University
  • 4:45 p.m. CLOSING

 

For questions regarding the Conference, please contact Dr. J.P. Auffret

Please RSVP: https://AICIOMay16.eventbrite.com

featured image (at top) courtesy of Midjourney AI

Hosted by:
About the Center for Assurance Research & Engineering (CARE) at College of Engineering and Computing at George Mason University

Mason’s College of Engineering and Computing hosts the Center for Assurance Research & Engineering (CARE). CARE’s multidisciplinary approach to cybersecurity encompasses the fields of technology, policy, business and leadership. Through partnerships with government and private industry, innovative research is translated into practices and policies used in real-world settings. Research includes security for distributed systems, mobile apps/devices, industrial control systems, and new technologies such as networked medical devices, as well as policy development for securing critical infrastructure and guidance for cybersecurity leadership/governance. For more information, please visit care.gmu.edu.

About the International Academy of CIO (IAC)

The International Academy of CIO (IAC) was founded in 2006 in Japan by co-founders from countries including Japan, the USA, Indonesia, the Philippines, Switzerland and Thailand. IAC members, partnerships and alliances now span all regions, with economies such as China, Cambodia, the Netherlands, India, Korea, Laos, Hong Kong, Macao, Peru, Singapore, South Africa, Nigeria, Taiwan, the United Kingdom, Vietnam, Italy, Russia, Kazakhstan and Uzbekistan. The IAC has active participation from more than 50 countries and partners with NGOs and multilateral organizations including APEC, OECD and ITU. For more information, please go to: www.iacio.org.


Artificial Intelligence and Evolving Issues Under U.S. Copyright and Patent Law


ABSTRACT

The proliferation of AI tools in the arts, commercial design industries, and other endeavors has raised core questions regarding who, or what, actually supplied the alleged creative or inventive elements, if any, in an AI system’s output. In both U.S. copyright and patent law, the question turns on a case-by-case analysis of how much of the final product evidences human “authorship” or invention. Creativity, as well as infringement, can be located in various phases of an AI system’s creation, ingestion of training materials, management, and operation, including its output, whether effected before or after that output is generated. Issues such as liability for selecting ingestion materials or target data, as well as the potential inadvertent triggering of patent law’s bar date through use of specific AI systems, have also come to the forefront of AI’s potential to secure, forfeit, or otherwise affect claimed proprietary rights in AI-assisted creative and inventive activities. Several alternative intellectual property and unfair competition approaches that can supplement or supplant copyright and patent law principles also come into play as users of AI seek to protect the products of their efforts.

AUTHOR

Gary Rinkerman is a partner at the law firm of FisherBroyles LLP, an Honorary Professor of U.S. Intellectual Property Law at Queen Mary University of London, and a Senior Fellow at the Center for Assurance Research and Engineering (“CARE”) in the College of Engineering and Computing at George Mason University, Virginia. For those interested in “digital archeology,” Professor Rinkerman also successfully argued one of the first cases in which copyright in object code was enforced, and he co-founded and served as Editor-in-Chief of Computer Law Reporter, one of the first legal publications (in the 1980s) to focus exclusively on law and computer technologies. This article should not be considered legal advice. The presentation of facts and the opinions expressed in this article are attributable solely to the author and do not necessarily reflect the views of any persons, organizations or entities with which he is affiliated or whom he represents. The author would also like to thank J.P. Auffret, Director of CARE, for his continuing support and for his expertise in the frontier areas of Artificial Intelligence.


Escalating Cyber Threats to U.S. Financial System Conference – In person and virtual


Escalating Cyber Threats to the U.S. Financial System: Time to Think Outside the Box Conference – Offered in person and virtually

May 4, 2023, 1:00 p.m. – 5:30 p.m. (Eastern)

In-Person, Van Metre Hall, George Mason University, Arlington, Virginia and via Zoom Webinar

Join Mason’s Center for Assurance Research and Engineering (CARE), the Financial Technology & Cybersecurity Center (Center), Mason’s School of Business, and a host of experts for an executive roundtable discussion on the cybersecurity challenges facing the financial services sector and the solutions that should be pursued.

The increasing number of cyber incursions demonstrates the vulnerabilities of digital technologies and an internet that was not created to be a secure network. There are options for overcoming these increasing vulnerabilities and protecting the financial system. Hear experts discuss current and upcoming issues in cybersecurity and the financial system including:

Agenda

  • Financial system innovation and challenges of cybersecurity
  • Geopolitics and national motivations for financial system innovation and development
  • Crime and criminal enterprises and the financial system
  • Role of multilateral institutions such as the IMF and World Bank
  • New web concepts to decentralize and segregate financial system critical infrastructure
  • Government initiatives to work with private industry
  • More secure software, hardware and systems
  • The path to global consensus
  • The cybersecurity alternatives that financial services companies are developing

The program will be moderated by cybersecurity experts:

  • Dr. Jean-Pierre Auffret, Director, Research Partnerships, School of Business, and Director, Center for Assurance Research and Engineering (CARE), College of Engineering & Computing, George Mason University
  • Thomas P. Vartanian, Executive Director of the Financial Technology & Cybersecurity Center

Dr. Ajay Vinzé, Dean of Mason’s School of Business, will provide opening remarks for the program
Brian Peretti, Director, Domestic and International Cyber Policy, Office of Cybersecurity and Critical Infrastructure Protection, U.S. Department of Treasury, will make Keynote Remarks

Cybersecurity Experts include:

• Stewart Baker, Of Counsel, Steptoe
• Emily Beam, Senior Vice President, Cyber Risk Institute
• John Carlson, Vice President, Cybersecurity Regulation and Resilience, American Bankers Association
• Dr. Robert Coles, Head of Security Strategy, D S Smith, Former Chief Information Security Officer – Merrill Lynch, National Grid, GlaxoSmithKline
• Steve Crocker, CEO and co-founder, Shinkuro
• James Dever, Cofounder and Principal, Lockhaven Solutions LLC; formerly U.S. Air Force Professor of Cyber Warfare
• John Geiringer, Barack Ferrazzano Kirshbaum & Nagelberg LLP
• Adam Golodner, Managing Partner, AMG Global Cyber Law, PLLC, CEO, Vortex Strategic Consulting, Co-Chair, Trusted Future
• Murray Kenyon, Cybersecurity Partnership Executive, U.S. Bank
• Jenny Menna, Vice President, Threat Management and Response, Humana
• Lisa Quest, Partner, Co-Head of the Public Sector and Policy Practice, Europe, Oliver Wyman; Co-author, Digital Trust: How Banks Can Secure Our Digital Identity, a joint report with the International Banking Federation
• Craig Schwartz, Managing Director, fTLD Registry – .Bank
• Marilyn Smith, IT Consultant, Former Chief Information Officer – George Mason University, Massachusetts Institute of Technology
• Scott Volmar, CEO, Intercomputer Network Corp

More event information:
There is no charge to attend the program either in person or via webinar.
Please feel free to share this invitation with your colleagues.

Hosted by

George Mason University’s Center for Assurance Research and Engineering, Financial Technology & Cybersecurity Center and the School of Business

Mason’s College of Engineering and Computing hosts the Center for Assurance Research & Engineering (CARE). CARE’s multidisciplinary approach to cybersecurity encompasses the fields of technology, policy, business and leadership. Through partnerships with government and private industry, innovative research is translated into practices and policies used in real-world settings. Research includes security for distributed systems, mobile apps/devices, industrial control systems, and new technologies such as networked medical devices, as well as policy development for securing critical infrastructure and guidance for cybersecurity leadership/governance. For more information, please visit care.gmu.edu.

The Financial Technology & Cybersecurity Center is a nonprofit organization that brings together financial services professionals, regulators, trade association representatives, consumer group representatives, counsel and advisors to discuss, debate, and advocate in regard to financial technology and cybersecurity issues and their regulation. For more information about the Center and to sign up to hear about future events and projects visit www.fintsc.org.

For questions regarding the program, please contact Dr. JP Auffret at jauffret@gmu.edu or Phone at 703-993-5641 or Robert Ledig, FTCC Managing Director at rledig@fintsc.org.