Center for Assurance Research and Engineering

Technology and Education

Technology experts emphasize the ethical and policy challenges of integrating technology into education, advocating for curricula that prepare students for a digital future while addressing issues of equity, access, and digital literacy. Concerns include the ethical use of AI-driven learning tools, data privacy in student tracking systems, and the digital divide that can exacerbate educational inequalities. Ensuring that students receive relevant, up-to-date instruction on technology—covering topics like cybersecurity, AI bias, and responsible AI use—is crucial for fostering informed and critical users of emerging technologies. Policymakers and educators need to develop policies that promote ethical technology use, ensure teacher training keeps pace with advancements, and incorporate diverse perspectives to create inclusive, future-ready learning environments.

Future Workforce

Ethical and forward-thinking policies need to address the impact of automation and AI on the future workforce. Concerns include job displacement, skill gaps, and the need for continuous workforce re-skilling to keep pace with technological advancements. Ensuring that AI-driven automation enhances rather than replaces human labor requires policies that promote equitable access to retraining programs and foster collaboration between industry, government, and educational institutions. Experts warn against unchecked technological adoption that prioritizes efficiency over worker well-being, advocating instead for policies that protect labor rights, prevent bias in AI-driven hiring and performance assessments, and create new opportunities for meaningful employment. A proactive, ethical approach to workforce transformation is essential to ensuring that technology benefits society as a whole rather than deepening economic inequalities.

Artificial Intelligence

Technology experts recognize that Artificial Intelligence presents a complex array of ethical and policy challenges that professionals in all fields need to consider. One primary concern is the potential for bias in AI systems. Because these systems learn from data, any historical biases in that data—whether related to race, gender, or socioeconomic status—can be perpetuated or even amplified, affecting decisions in areas like hiring, law enforcement, or healthcare. Another issue is privacy, especially as AI systems often rely on vast amounts of personal data. The risk here is not only data breaches but also the possibility of invasive surveillance or misuse by bad actors. Furthermore, AI’s increasing capability to automate tasks raises questions about job displacement and economic inequality, as certain roles may become obsolete, while others require entirely new skill sets. From a policy perspective, clear regulations need to ensure AI is developed and deployed responsibly—protecting individuals' rights, preventing harm, and maintaining fairness. Policies must also be flexible enough to keep pace with the fast-evolving nature of the technology. Addressing these issues will require collaboration across industries and careful thought about how AI can serve the common good without exacerbating existing societal problems.
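
As a hypothetical illustration of how such bias can be surfaced, the short Python sketch below computes a demographic parity gap, one common fairness check, over a toy set of hiring decisions. The records and the review threshold are invented for the example:

```python
# Minimal sketch: demographic parity gap on toy hiring decisions.
# The records and the 0.1 review threshold are illustrative assumptions.
decisions = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "A", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

def selection_rate(records: list, group: str) -> float:
    """Fraction of applicants in `group` who received a positive decision."""
    outcomes = [r["hired"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

gap = abs(selection_rate(decisions, "A") - selection_rate(decisions, "B"))
print(f"demographic parity gap: {gap:.2f}")  # 0.50 on this toy data
if gap > 0.1:  # illustrative threshold for triggering human review
    print("Flag for review: outcomes differ substantially across groups.")
```

Real audits rely on richer metrics and statistical testing, but even this simple check shows how decisions trained on historical data can reproduce past disparities.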

AI and Work (STEP)

Generative AI and the future of work in America, McKinsey Global Institute Report (July 26, 2023)

AI and Education (STEP)

Principles for the Development, Deployment, and Use of Generative AI Technologies, ACM Technology Policy Council (June 27, 2023)

Generative Artificial Intelligence (AI) is a broad term used to describe computing techniques and tools that can be used to create new content such as text, speech and audio, images and video, and computer code. While such systems offer tremendous opportunities for benefits to society, they also pose very significant risks. The increasing power of generative AI systems, the speed of their evolution, their broad application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.

AI Regulation (STEP)

The ethical and policy challenges of AI regulation include the need for balanced oversight that fosters innovation while mitigating harm. Regulations should be adaptable to the rapid evolution of AI, avoiding overly rigid rules that stifle progress while ensuring accountability and transparency. Key concerns include bias in AI decision-making, data privacy, and the societal impact of automation. Policies should encourage interdisciplinary collaboration and integrate insights from ethics, law, and the social sciences to create frameworks that are both practical and forward-looking. Rather than reactive, one-size-fits-all approaches, policymakers should pursue nuanced, evidence-based regulations that consider the diverse applications and risks of AI technologies.

The CARE Race to Regulate the Internet program was held on May 8 (recording available on YouTube).

Should States or the Federal Government Set the Rules for Website Content, Child Protection, and Personal Data Control?

See headlines and details at the CARE Race to Regulate the Internet update.

Statement in Support of Mandatory Comprehensive Digital Accessibility Regulations, US Technology Policy Committee (May 31, 2024)

Content Provenance in the Age of AI (STEP)

Content Provenance refers to the facts about the history of a digital content asset (image, video, audio recording, or document).

Technical specifications were released in 2022 by the Coalition for Content Provenance and Authenticity (C2PA) under the name Content Credentials.

Data provenance refers to a documented trail that accounts for the origin of a piece of data and traces its movement from where it was created to where it currently resides. Its purpose is to tell developers where data originated, how it has changed, and the details that support confidence in its validity. Provenance also encourages transparency from data creators, producing a chain of information through which data can be tracked as it is used and adapted for new purposes.
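
To make this concrete, here is a minimal sketch in Python of a hash-chained provenance log, in which each record commits to the data's current state and to the record before it. This is an illustrative assumption, not a published standard or any C2PA implementation; the field names and action labels are invented for the example:

```python
import hashlib
import json
from datetime import datetime, timezone

def _digest(payload: dict) -> str:
    """Deterministic SHA-256 over a JSON-serialized record."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_record(log: list, data: bytes, action: str, actor: str) -> None:
    """Append a provenance record committing to the data and the prior record."""
    record = {
        "action": action,        # hypothetical labels, e.g. "created", "cleaned"
        "actor": actor,          # who performed the action
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_hash": hashlib.sha256(data).hexdigest(),
        "prev_hash": log[-1]["record_hash"] if log else None,
    }
    record["record_hash"] = _digest(record)
    log.append(record)

# Example: record an origin and one transformation.
log: list = []
append_record(log, b"raw survey responses", "created", "field-team")
append_record(log, b"deduplicated survey responses", "cleaned", "analyst-7")
```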

In the context of technology, particularly in areas like data and supply chains, provenance refers to the tracking and verification of the origin and journey of data, goods, or materials. Standards are critical for ensuring transparency and accountability. However, maintaining accurate provenance can raise concerns about privacy and consent, particularly when personal or sensitive data is involved. For example, how can the need to trace data usage or supply chain practices be balanced against individuals' rights and the confidentiality of sensitive information? Additionally, there are ethical considerations regarding the accuracy and integrity of provenance records, since misleading or incomplete records can perpetuate fraud or misrepresentation. From a policy standpoint, there is a need for clear standards around how provenance data is collected, shared, and verified, especially in global supply chains or when dealing with cross-border data. As provenance tracking becomes more common, policies must ensure that systems are robust, transparent, and respectful of privacy rights while also preventing misuse, such as counterfeiting or unethical sourcing. The goal is to create frameworks that encourage trust and accountability while addressing potential risks and gaps.
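
Continuing the hypothetical sketch above, the integrity of such a chain can be checked by recomputing each record's hash and its link to the previous record; altering any one record breaks every link after it, which is what makes falsified histories detectable:

```python
def verify_log(log: list) -> bool:
    """Recompute each record's hash and check its link to the predecessor."""
    prev_hash = None
    for record in log:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if _digest(body) != record["record_hash"] or record["prev_hash"] != prev_hash:
            return False  # tampered, reordered, or missing record
        prev_hash = record["record_hash"]
    return True

assert verify_log(log)              # the intact chain passes
log[0]["actor"] = "someone-else"    # simulate a falsified origin record
assert not verify_log(log)          # tampering is detected
```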

Cybersecurity (STEP)

Statement On Mass Cybersecurity Incidents Likely to Recur, US Technology Policy Committee (August 11, 2024)

Cybersecurity ethics and policy issues are increasingly critical as digital threats become more sophisticated. One major concern is the balance between security and privacy: while cybersecurity measures are necessary to protect individuals and organizations from cyberattacks, they can sometimes infringe on privacy rights. For instance, the use of surveillance tools, data monitoring, or even mass data collection can raise ethical questions about the extent to which personal information is exposed. Another issue is the responsibility of organizations to safeguard sensitive data; the ethical implications of data breaches or negligence in securing systems can be profound, especially when it comes to financial, healthcare, or personal data. Robust, clear regulations need to set standards for cybersecurity practices while fostering innovation, and they must be adaptable to the fast-changing nature of cyber threats so that organizations stay ahead of attackers. Ethical challenges also arise in defending against cyber threats while avoiding collateral damage, such as the unintended consequences of retaliatory cyber actions. Professionals across industries must understand these complexities to ensure cybersecurity measures are both effective and aligned with broader societal values.

Trustworthy AI

ACM TechBrief on Trusted AI

The effectiveness of mechanisms and metrics implemented to promote trust in AI must be empirically evaluated to determine whether they actually work. Distrust of AI implicates trustworthiness and calls for a deeper understanding of stakeholder perceptions, concerns, and fears associated with AI and its specific applications. Fostering public trust in AI will require that policymakers demonstrate how they are holding industry accountable to the public and its legitimate concerns.

Explainable AI

  • Definitions of key terms
  • Summaries of areas of research
  • Comments from individuals and organizations
  • Understandings, issues, and predictions