Center for Assurance Research and Engineering

Future Workforce


Ethical and forward-thinking policies need to address the impact of automation and AI on the future workforce. Concerns include job displacement, skill gaps, and the need for continuous workforce re-skilling to keep pace with technological advancements. Ensuring that AI-driven automation enhances rather than replaces human labor requires policies that promote equitable access to retraining programs and foster collaboration between industry, government, and educational institutions. Experts warn against unchecked technological adoption that prioritizes efficiency over worker well-being, advocating instead for policies that protect labor rights, prevent bias in AI-driven hiring and performance assessments, and create new opportunities for meaningful employment. A proactive, ethical approach to workforce transformation is essential to ensuring that technology benefits society as a whole rather than deepening economic inequalities.

Artificial Intelligence

Technology experts recognize that Artificial Intelligence presents a complex array of ethical and policy challenges that professionals in all fields need to consider. One primary concern is the potential for bias in AI systems. Because these systems learn from data, any historical biases in that data—whether related to race, gender, or socioeconomic status—can be perpetuated or even amplified, affecting decisions in areas like hiring, law enforcement, or healthcare. Another issue is privacy, especially as AI systems often rely on vast amounts of personal data. The risk here is not only data breaches but also the possibility of invasive surveillance or misuse by bad actors. Furthermore, AI’s increasing capability to automate tasks raises questions about job displacement and economic inequality, as certain roles may become obsolete, while others require entirely new skill sets. From a policy perspective, clear regulations need to ensure AI is developed and deployed responsibly—protecting individuals' rights, preventing harm, and maintaining fairness. Policies must also be flexible enough to keep pace with the fast-evolving nature of the technology. Addressing these issues will require collaboration across industries and careful thought about how AI can serve the common good without exacerbating existing societal problems.
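
To make the bias concern concrete, here is a minimal, hypothetical sketch of one way an audit can quantify disparate outcomes: comparing selection rates across demographic groups in a model's hiring decisions. The data and group names are invented for illustration; real audits use multiple fairness metrics, real outcomes, and legal guidance.

```python
# Illustrative sketch only: demographic parity as one simple fairness check.
# All data and group labels below are hypothetical.

def selection_rate(decisions):
    """Fraction of candidates receiving a positive decision (1 = advance)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates across groups.

    A gap near 0 suggests similar outcomes across groups; a large gap
    flags the model (or its training data) for closer review.
    """
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs: 1 = advance to interview, 0 = reject.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],
}
gap, rates = demographic_parity_gap(decisions)
print(rates)                      # per-group selection rates
print(f"parity gap: {gap:.2f}")   # 0.38 here, a gap worth investigating
```

A gap this large would not by itself prove unlawful bias, but it is the kind of empirical signal that hiring-tool audits increasingly ask vendors to measure and report.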

J.P. Auffret gives talk in Thailand at AI Strategic Transformation: Principles and Practices for CIOs international conference on “Driving AI Innovation for Sustainable Growth”


The conference was a collaborative effort between the College of Innovation at Thammasat University (CITU) and the International Academy of CIO (IACIO). The event served as a platform for fostering partnerships and knowledge exchange, focusing on harnessing AI to drive sustainable development and economic growth in Thailand.

Dr. Jean-Pierre Auffret, Chairman of IACIO, spoke on the evolving role of Chief Information Officers (CIOs) in the digital age. He underscored the importance of redefining the CIO’s role to meet the demands of today’s fast-paced technological advancements. He also stressed the need for ethical AI deployment, advocating for transparency and responsible use of AI within organizational frameworks.

“Our goal is to promote the use of AI that not only boosts efficiency but also aligns with ethical standards and social responsibility,” Dr. Auffret stated.

The IACIO 2024 conference highlighted the importance of international collaboration in AI development. The event brought together experts from over 50 countries, fostering the exchange of ideas and best practices on integrating AI. Discussions emphasized balancing innovation with ethics to ensure responsible and sustainable AI deployment.

AI and Work (STEP)


Generative AI and the future of work in America, McKinsey Global Institute Report, July 26, 2023

AI and Education (STEP)


Technology experts emphasize the ethical and policy challenges of integrating technology into education, advocating for curricula that prepare students for a digital future while addressing issues of equity, access, and digital literacy. Concerns include the ethical use of AI-driven learning tools, data privacy in student tracking systems, and the digital divide that can exacerbate educational inequalities. Ensuring that students receive relevant, up-to-date instruction on technology—covering topics like cybersecurity, AI bias, and responsible AI use—is crucial for fostering informed and critical users of emerging technologies. Policymakers and educators need to develop policies that promote ethical technology use, ensure teacher training keeps pace with advancements, and incorporate diverse perspectives to create inclusive, future-ready learning environments.

Principles for the Development, Deployment, and Use of Generative AI Technologies, ACM Technology Policy Council (June 27, 2023)

Generative Artificial Intelligence (AI) is a broad term used to describe computing techniques and tools that can be used to create new content such as text, speech and audio, images and video, and computer code. While such systems offer tremendous opportunities for benefits to society, they also pose very significant risks. The increasing power of generative AI systems, the speed of their evolution, their broad application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.

AI Regulation (STEP)


The ethical and policy challenges of AI regulation include the need for balanced oversight that fosters innovation while mitigating harm. Regulations should be adaptable to the rapid evolution of AI, avoiding overly rigid rules that stifle progress while ensuring accountability and transparency. Key concerns include bias in AI decision-making, data privacy, and the societal impact of automation. Policies should encourage interdisciplinary collaboration and integrate insights from ethics, law, and social sciences to create frameworks that are both practical and forward-looking. Policymakers should avoid reactive, one-size-fits-all approaches and instead craft nuanced, evidence-based regulations that account for the diverse applications and risks of AI technologies.

The CARE Race to Regulate the Internet program was held on May 8 (recording available on YouTube).

Should States or the Federal Government Set the Rules for Websites Content, Child Protection and Personal Data Control?

See headlines and details at the CARE Race to Regulate the Internet update.


Statement in Support of Mandatory Comprehensive Digital Accessibility Regulations

US Technology Policy Committee (May 31, 2024)

Content Provenance in the Age of AI (STEP)



Content Provenance refers to the facts about the history of a piece of digital content (an image, video, audio recording, or document).

Technical specifications were released in 2022 by the Coalition for Content Provenance and Authenticity (C2PA Content Credentials).

Data provenance refers to a documented trail that accounts for the origin of a piece of data and traces where it has moved, from its creation to its present location. The purpose of data provenance is to tell developers the origin of data, the changes made to it, and the details that support confidence in its validity. Provenance also holds data creators to transparency about their work: where the data came from, and the chain of custody through which it can be tracked as others use and adapt it for their own purposes.
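
As a rough illustration of this "documented trail" idea, the sketch below hash-chains a sequence of hypothetical provenance records so that any later alteration of the history is detectable. It is a simplified stand-in for real provenance systems, not the C2PA specification, which defines signed manifests rather than the bare hashes used here.

```python
# Illustrative sketch only: a minimal hash-chained provenance trail.
# Real systems (e.g., C2PA Content Credentials) use signed manifests.

import hashlib
import json
from datetime import datetime, timezone

def add_record(trail, actor, action, details):
    """Append a provenance record that commits to the previous record."""
    record = {
        "actor": actor,
        "action": action,
        "details": details,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": trail[-1]["record_hash"] if trail else "genesis",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)

def verify(trail):
    """Recompute every hash; return False if any record was altered."""
    for i, record in enumerate(trail):
        if record["prev_hash"] != (trail[i - 1]["record_hash"] if i else "genesis"):
            return False
        body = {k: v for k, v in record.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["record_hash"]:
            return False
    return True

trail = []
add_record(trail, "camera-01", "capture", {"format": "jpeg"})
add_record(trail, "editor-app", "crop", {"region": "640x480"})
print(verify(trail))                   # True: trail is internally consistent
trail[0]["details"]["format"] = "png"  # tamper with the recorded history
print(verify(trail))                   # False: tampering is detected
```

The design point is simply that each record commits to everything before it, so any downstream consumer can check a provenance trail without having to trust the intermediate handlers.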

In the context of technology, particularly in areas like data and supply chains, provenance refers to the tracking and verification of the origin and journey of data, goods, or materials. Standards are critical for ensuring transparency and accountability. However, maintaining accurate provenance can raise concerns about privacy and consent, particularly when personal or sensitive data is involved. For example, how do we balance the need to trace data usage or supply chain practices without infringing on individuals' rights or exposing confidential information? Additionally, there are ethical considerations regarding the accuracy and integrity of provenance records—misleading or incomplete records can perpetuate fraud or misrepresentation. From a policy standpoint, there is a need for clear standards around how provenance data is collected, shared, and verified, especially in global supply chains or when dealing with cross-border data. As provenance tracking becomes more common, policies must ensure that systems are robust, transparent, and respect privacy rights while also preventing misuse, such as counterfeiting or unethical sourcing. The goal is to create frameworks that encourage trust and accountability while addressing potential risks and gaps.

Cybersecurity (STEP)


Statement On Mass Cybersecurity Incidents Likely to Recur, US Technology Policy Committee (August 11, 2024)

Cybersecurity ethics and policy issues are increasingly critical as digital threats become more sophisticated. One major concern is the balance between security and privacy—while cybersecurity measures are necessary to protect individuals and organizations from cyberattacks, they can sometimes infringe on privacy rights. For instance, the use of surveillance tools, data monitoring, or even mass data collection can raise ethical questions about the extent to which personal information is exposed. Another issue is the responsibility of organizations to safeguard sensitive data; the ethical implications of data breaches or negligence in securing systems can be profound, especially when it comes to financial, healthcare, or personal data. Robust, clear regulations need to set standards for cybersecurity practices while fostering innovation. Regulations must also be adaptable to the fast-changing nature of cyber threats, ensuring that organizations stay ahead of attackers. Ethical challenges also arise when considering how to defend against cyber threats while avoiding collateral damage such as unintended consequences from retaliatory cyber actions. Professionals across industries must understand these complexities to ensure cybersecurity measures are both effective and aligned with broader societal values.

The Race to Regulate the Internet — August Alert (Significant Developments)


The Race to Regulate the Internet: Should States or the Federal Government Set the Rules for Websites Content, Child Protection and Personal Data Control?

August 2024

We are writing to provide an update on some significant developments since our Race to Regulate the Internet program held on May 8 – video available here:  https://youtu.be/5YBTmqFN7To

Headlines 

  1. Supreme Court returns challenge to State content moderation laws to lower courts for further development
  2. Supreme Court dismisses injunction against certain federal government contacts with social media firms based on lack of standing by plaintiffs
  3. Department of Justice Office of Inspector General reviews FBI contacts with social media firms regarding foreign threats to U.S. elections in relation to First Amendment considerations
  4. Congressman calls for investigation of alleged censorship by X of Vice President Harris’ campaign
  5. New York enacts legislation to protect children online
  6. White House Task Force on Kids Online Safety makes recommendations for parents and social media companies
  7. Department of Justice lawsuit alleges TikTok collected data on children under 13 in violation of the Children’s Online Privacy Protection Act
  8. Meta to pay $1.4 billion to settle claims regarding unauthorized capture of personal biometric data

 

State or Federal Government Regulation of, or Influence on, Website Content 

 

1. Supreme Court sends Florida and Texas website content restriction cases back for further development, but indicates that State efforts to bring balance to website editorial content decisions are not consistent with First Amendment principles

 

For most of the life of the internet, States, for a variety of reasons including early unfavorable court decisions, have generally not been inclined to attempt to impose requirements or restrictions on private party website content.  In recent years, two large States, Florida and Texas, motivated by a perception that some large websites were discriminating against the presentation of conservative viewpoints, enacted laws designed to place requirements or restrictions on website content moderation intended to balance the range of views that are presented.

 

The Eleventh Circuit Court of Appeals affirmed a district court injunction against the Florida law, finding that the law was not likely to survive a review under the First Amendment.  In contrast, the Fifth Circuit Court of Appeals reversed a district court injunction against the Texas law, based upon its determination that the law did not regulate any speech and thus did not implicate the First Amendment.

 

The Supreme Court’s review of these two laws on appeal was much anticipated.  In a July 1 ruling, Justice Kagan, joined by Chief Justice Roberts and Justices Sotomayor, Kavanaugh, Barrett, and Jackson, ruled that the trade association plaintiffs (NetChoice and the Computer & Communications Industry Association (CCIA)) had challenged the laws on their face (as opposed to as applied) and thus had the very high burden of showing that a substantial number of a law’s applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.  The Court stated that the parties and the courts below had limited their analysis to a relatively narrow scope of website activities rather than address the full range of activities the laws covered and measure the constitutional versus unconstitutional applications of the law.  Thus, the Court held that the parties had not briefed the critical issues, and the record was underdeveloped.  As a result, the Court vacated the lower court decisions and remanded the cases for further consideration.

 

The Court went on to provide guidance for this further consideration.  It indicated that it wanted to avoid having the Fifth Circuit conduct further proceedings on the basis that its First Amendment analysis was correct, where the Court found that the Fifth Circuit’s conclusions involved a serious misunderstanding of the Court’s First Amendment precedent.  The Court stated that the Fifth Circuit was wrong in concluding that the restrictions placed on a website’s selection and presentation of content do not interfere with expression.  Furthermore, it observed that the Fifth Circuit was wrong to treat as valid Texas’ interest in changing the content on the website’s feeds, in order to better balance the marketplace of ideas.  The Court further observed that the Eleventh Circuit saw the First Amendment issues in the case much as the Court does.

 

Justice Alito, joined by Justices Thomas and Gorsuch, wrote that the only binding holding in the decisions was that the plaintiffs have yet to prove that the laws they challenged are facially unconstitutional, with which they agreed.  He went on to take issue with much of the majority’s analysis.

 

We were fortunate to have representatives of CCIA (Stephanie Joyce) and NetChoice (Carl Szabo) join us as panelists at our May 8 program.

 

Moody v. NetChoice, 22-277_d18f.pdf (supremecourt.gov)

 

2. Supreme Court turns away an effort to restrict Federal government communications with websites allegedly aimed at coercing the websites to suppress content disfavored by the government based on lack of standing by plaintiffs

 

On June 26, the Supreme Court reversed a Fifth Circuit ruling granting a preliminary injunction to two States (Missouri and Louisiana) and five individuals based on allegations that Executive Branch agencies and officials had pressured certain websites to suppress protected speech in regard to COVID-19 (White House, CDC, Surgeon General) and the 2020 Election (FBI and CISA) in violation of the First Amendment.  The injunction provided that the defendants and their employees shall not coerce or significantly encourage social media companies to remove, suppress, or reduce posted content containing protected free speech.  

 

Justice Barrett wrote the Court’s opinion.  She reviewed the allegations of the individual plaintiffs that the actions of the government had caused platforms to censor their content at the behest of the government and that this would continue to occur.  The individual plaintiffs also argued that they had suffered injury to their right to listen to others.  The States asserted a sovereign interest in hearing from their citizens on social media.  In all instances the Court found that the plaintiffs had failed to demonstrate that they had standing to pursue their claims and reversed the ruling of the Fifth Circuit.

 

Justice Alito, joined by Justices Thomas and Gorsuch, dissented.  He wrote that one of the individual plaintiffs had shown that Facebook’s censorship of her content resulted at least in part from the White House’s prompting of Facebook to amend its censorship policies, and therefore had standing.  Justice Alito asserted that the government officials with potent authority had communications with Facebook that were virtual demands and that Facebook’s response showed that it felt a strong need to yield.  As a result he concluded that the individual plaintiff was likely to prevail on her claim that the White House coerced Facebook into censoring her speech.

 

Murthy v. Missouri, 23-411_3dq3.pdf (supremecourt.gov)

 

3. Department of Justice (DOJ) Office of Inspector General’s (OIG) Report on DOJ’s Sharing of Information About Foreign Malign Influence Threats to U.S. Elections

 

As part of a report on the DOJ’s efforts to deal with foreign efforts to interfere with U.S. elections through information sharing inside and outside of the government, issued on July 23, the OIG touched on some of the issues presented in the Murthy case.

 

The OIG found that the DOJ did not have a comprehensive strategy guiding its approach to engagement with social media companies in regard to foreign malign influence threats (FMITs) on U.S. elections, which the OIG believes creates a risk to the DOJ.  The OIG observed that the FBI must be mindful that its interactions with social media companies could be perceived as coercion or significant encouragement aimed at convincing social media companies to limit or exclude speech posted by its users, which may implicate First Amendment protections, noting the Fifth Circuit’s opinion in Murthy.

 

During the course of its review the OIG recommended that the DOJ develop an approach to inform the public of its procedures for transmitting notices of FMITs to social media companies that is protective of First Amendment rights.  The DOJ indicated that during the course of the lower court proceedings in Murthy in October 2023 it began developing a standardized approach for sharing FMITs with social media companies that appropriately accounts for First Amendment concerns.

 

This process led to the issuance of a standard operating procedure (SOP) that went into effect in February 2024.  The DOJ stated that the SOP reflects the principle that it is permissible to share FMITs with social media companies, as long as it is clear that ultimately it is up to the company whether to take any action, including removing content or barring users based on such information.

 

OIG Press release   DOJ OIG Releases Report of Investigation Regarding Alleged Unauthorized Contacts by FBI Employees with the Media and Other Persons in Advance of the 2016 Election (justice.gov), OIG Report  24-080.pdf (justice.gov)

 

4. Representative Nadler calls for Congressional investigation of alleged censorship by X related to Vice President Harris’s campaign for President

 

To date, complaints about website censorship have come largely from conservatives, inspiring, among other things, the Texas and Florida laws at issue in Moody v. NetChoice.  In a change of pace, on July 23, Representative Jerrold Nadler, a Democrat from New York who is the Ranking Member on the House Judiciary Committee, wrote to Committee Chairman Jim Jordan regarding alleged censorship of Vice President Kamala Harris’ campaign handle on X.

 

Representative Nadler stated that numerous users reported that over the past two days when they tried to follow @KamalaHQ they received a “Limit Reached” message stating that the user is “unable to follow more people at this time.”  He said that the messages do not make any sense as these users are otherwise free to follow other accounts.  He stated that “[t]his suggests that X may be intentionally throttling or blocking Vice President Harris’ ability to communicate with potential voters.  If true, such action would amount to egregious censorship based on political and viewpoint discrimination – issues this Committee clearly has taken very seriously.”  Representative Nadler requested that the Committee immediately launch an investigation and request specified information from X.

 

Representative Nadler’s Letter  2024-07-23_jn_to_jdj.pdf (house.gov)

 

Protection of Children on the Internet

 

5. New York enacts legislation to protect children online

 

Recently many states have enacted laws that require either age verification for access to adult content websites or parental consent for children’s access to social media sites.  New York now joins California in taking a different approach to protecting children online.

 

In 2022 California enacted the California Age-Appropriate Design Code Act, which requires businesses whose online products or services are likely to be accessed by children to include or exclude certain design features in order to protect those children.  On September 18, 2023, a Federal District Court in California issued a preliminary injunction against the law, finding that it likely violated the First Amendment.  The Ninth Circuit heard oral argument on California’s appeal on July 17.

 

California Age-Appropriate Design Act  Bill Text – AB-2273 The California Age-Appropriate Design Code Act., Preliminary injunction against The California Age-Appropriate Design Code Act  NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf

 

On June 20, 2024, New York Governor Kathy Hochul signed two pieces of legislation intended to protect children online.  She said “[y]oung people across the nation are facing a mental health crisis fueled by addictive social media feeds – and New York is leading the way with a new model for addressing the crisis and protecting our kids.”  She further observed that “[b]y reining in addictive feeds and shielding kids’ personal data, we’ll provide a safer digital environment, give parents more peace of mind, and create a brighter future for young people across New York.”

 

The Stop Addictive Feeds Exploitation (SAFE) for Kids Act (SAFE Act) targets “addictive feeds”.  It prohibits users under 18 from viewing addictive feeds on social media platforms without parental consent.  It also prohibits platforms from sending notifications to minors between 12:00 a.m. and 6:00 a.m.

 

The New York Child Data Protection Act prohibits online sites from collecting, using, sharing or selling personal data of anyone under the age of 18, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website.  For users under 13, informed consent must be provided by a parent.

 

SAFE Act  S7694A (nysenate.gov), Child Data Protection Act  s7695b (nysenate.gov)

 

We were fortunate to have Utah State Senator Michael McKell, the sponsor of Utah’s social media parental consent law, join us as a panelist at our May 8 program.

 

6. Kids Online Health and Safety Task Force announces recommendations and best practices for safe internet use

 

On July 22, the Kids Online Health and Safety Task Force, composed of representatives of the White House, the Department of Health and Human Services, the Department of Commerce, the Department of Education, the Federal Trade Commission, the Department of Homeland Security and the DOJ, issued a document titled Best Practices for Families and Guidance for Industry.

 

The report identifies key risks and benefits of online platforms and digital technologies to young people’s health, safety and privacy.  It provides best practices for parents and recommended practices for companies.  Recommended practices include:           

• Designing age-appropriate experiences for youth users.
• Making privacy protections for youth the default.
• Reducing and removing features that encourage excessive or problematic use by youth.
• Providing age-appropriate parental control tools that are easy to understand and use.

A number of State children’s online protection laws include requirements for parental consent for social media use.  The report instead appears to focus on parental controls within a particular online platform, noting that they can help parents exercise more control of their children’s online experience, but cautioning that parental controls may be invasive to young people’s privacy, and commenting that a one-size-fits-all approach may not be appropriate for many families.

 

The report describes a series of initiatives that various Federal agencies are undertaking in support of kids online safety.  The report also calls on Congress to enact legislation to protect youth online.  It states that such legislation should include prohibiting platforms from collecting personal data from youth, banning targeted advertising, and implementing measures to keep children safe from those who would use online platforms to harm, harass, and exploit them.

 

White House Kids Safety Task Force press release  Kids Online Health and Safety Task Force Announces Recommendations and Best Practices for Safe Internet Use | HHS.gov

 

7. DOJ sues TikTok for alleged collection of data on children under 13 in violation of the Children’s Online Privacy Protection Act (COPPA)

 

On August 2, the DOJ filed suit against TikTok and affiliated entities alleging that over the past five years TikTok knowingly permitted children under 13 to create regular TikTok accounts while collecting and retaining personal information from the children without notifying or obtaining consent from their parents.  The DOJ further alleged that even as to accounts intended for children under 13, TikTok unlawfully collected and retained certain personal information.  Moreover, the DOJ alleged that when parents learned of their children’s accounts and requested that TikTok delete the accounts and related information, TikTok frequently failed to do so.  The DOJ’s suit notes that the alleged actions occurred despite the companies being subject to a court order prohibiting them from violating COPPA and imposing measures to ensure compliance with the law.  The suit seeks civil penalties and injunctive relief.

 

The DOJ’s press release asserted that TikTok’s COPPA violations have resulted in millions of children under 13 using the regular TikTok app, thereby subjecting them to extensive data collection and allowing them to interact with adult users and to access adult content.  It stated that the Department is deeply concerned that TikTok has continued to collect and retain children’s personal information despite a court order barring such conduct.

 

DOJ press release  Office of Public Affairs | Justice Department Sues TikTok and Parent Company ByteDance for Widespread Violations of Children’s Privacy Laws | United States Department of Justice, DOJ complaint  dl (justice.gov)

 

Personal Data Control

 

8. Meta agrees to pay $1.4 billion to settle a suit by the Texas Attorney General regarding allegations that Meta was unlawfully capturing biometric data of users

 

On July 31, the Texas Attorney General’s Office announced that it had entered into a settlement agreement with Meta to stop the company’s capture and use of the personal biometric data of millions of Texans without the authorization required by Texas law.  The Texas AG stated that this was the first lawsuit and settlement brought under the Texas Capture or Use of Biometric Identifier Act (CUBI).

 

According to the AG the suit involved a feature introduced in 2011 that made it easier for users to tag photos with the names of people in the photos.  The AG said that Meta automatically turned this feature on without explaining how it worked, and that for more than a decade it ran facial recognition software on faces uploaded to Facebook, capturing the facial geometry of those faces.  The AG alleged that Meta did this despite knowing that CUBI forbids companies from capturing such biometric identifiers of Texans unless the company first informs the person and receives their consent to do so.

Under the settlement, Meta will pay Texas $1.4 billion over five years, which the AG described as the largest privacy settlement an Attorney General has ever obtained. 

 

Texas AG press release Attorney General Ken Paxton Secures $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data In Largest Settlement Ever Obtained From An Action Brought By A Single State | Office of the Attorney General (texasattorneygeneral.gov), Agreed Final Judgment Final State of Texas v Meta Order 2024.pdf (texasattorneygeneral.gov), Texas AG’s February 2022 suit against Meta  State of Texas v. Meta Platforms Inc..pdf (texasattorneygeneral.gov)

 

 

• Jean-Pierre Auffret, George Mason University’s Director, Research Partnerships, School of Business; Director, Center for Assurance Research and Engineering (CARE), College of Engineering & Computing, [email protected]

 

• Thomas P. Vartanian, Executive Director of the Financial Technology & Cybersecurity Center, Author, The Unhackable Internet: How Rebuilding Cyberspace Can Create Real Security and Prevent Financial Collapse, [email protected]

 

• Robert H. Ledig, Managing Director of the Financial Technology & Cybersecurity Center, [email protected]

Trustworthy AI


ACM TechBrief on Trusted AI

The effectiveness of mechanisms and metrics implemented to promote trust in AI must be empirically evaluated to determine if they actually work. Distrust of AI implicates trustworthiness and calls for a deeper understanding of stakeholder perceptions, concerns, and fears associated with AI and its specific applications. Fostering public trust in AI will require that policymakers demonstrate how they are making industry accountable to the public and their legitimate concerns.
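
As one concrete example of the empirical evaluation the TechBrief calls for, the sketch below computes expected calibration error (ECE), a common check of whether a model's stated confidence matches its actual accuracy. The predictions here are hypothetical, and calibration is only one narrow facet of trustworthiness; a real evaluation would combine many technical and stakeholder-facing measures.

```python
# Illustrative sketch only: expected calibration error (ECE) on
# hypothetical predictions. Lower is better; 0.0 is perfectly calibrated.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Weighted average of |accuracy - mean confidence| across bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # bin by confidence level
        bins[idx].append((conf, ok))
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / len(confidences)) * abs(accuracy - avg_conf)
    return ece

# Hypothetical outputs: model confidence and whether each prediction was right.
confs = [0.95, 0.90, 0.80, 0.75, 0.60, 0.55, 0.90, 0.85]
right = [1, 1, 0, 1, 1, 0, 0, 1]
print(f"ECE: {expected_calibration_error(confs, right):.3f}")
```

A model that reports 90 percent confidence but is right only 60 percent of the time fails this kind of check, which is exactly the gap between claimed and demonstrated reliability that undermines public trust.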