OVERVIEW
The STEP Collaborative, operating under the GMU Center for Assured Research and Engineering (CARE), strives to be the leading resource for research-based, unbiased information for policymakers and citizens in the Commonwealth of Virginia.
The STEP Collaborative collects, analyzes, curates, and communicates knowledge on important developments in science and technology that impact citizens and policymakers making decisions about the future of individuals and society. STEP collaborators provide objective and unbiased information on new technologies and applications to aid decision making about legislation and regulation.
In 2025, artificial intelligence moved from being something “on the horizon” to becoming a part of everyday life. From workplaces and classrooms to elections and critical infrastructure, AI systems, especially generative and increasingly autonomous tools, quickly transitioned from testing to real-world application. As a result, debates about AI ethics and policy shifted from abstract principles to urgent, practical questions about risk, responsibility, and trust.
Throughout the year, public attention and policy discussions focused on several major issues:
- Enforcement of risk-based AI regulations, as governments move beyond voluntary guidelines
- Stronger attention to AI safety through new evaluation methods, audits, and oversight standards
- Intense debate over generative AI, including concerns about consent, data rights, environmental impact, and misuse
- The emergence of autonomous and “agentic” AI, capable of acting with minimal or no direct human control
- AI-assisted coding and software development, such as “vibe coding,” which raises questions about accountability, reliability, and security
- Rising dangers of misinformation and deepfakes, especially during election periods
- Legal disputes over training data, copyright, and creator compensation
- Rising cybersecurity threats and systemic risk as society becomes more reliant on AI-driven systems
- Rapid shifts in the workforce as AI transforms jobs, skills, and productivity
- Ongoing issues with bias and discrimination, especially in critical areas such as hiring, credit, and public services
- Growing concern over AI’s environmental impact, including large-scale energy and resource consumption
As AI systems become more powerful and autonomous, 2026 is likely to bring an even sharper focus on where and whether these technologies should be used at all. Key issues expected to dominate the conversation include:
- Assigning responsibility and liability when autonomous AI causes harm
- Establishing clear standards for authenticity and verification to help people trust what they see and hear online.
- Balancing global cooperation with regional regulation, as countries adopt different approaches.
- Managing deeper workforce disruption, alongside increasing demand for AI literacy and reskilling
- Imposing environmental limits on AI growth, as energy-intensive systems strain resources.
- Strengthening duties for platforms and intermediaries, including monitoring, auditing, and rapid response to harms.
- Developing new legal strategies for synthetic identities, likeness rights, and consent.
Increasing scrutiny of AI standards raises questions about whether they effectively reduce harmful behavior or merely help organizations meet compliance requirements. AI capabilities are advancing faster than institutions, labor markets, and governance systems can adapt. The main ethical question for 2026 is no longer whether advanced AI can be used safely in theory, but whether organizations will establish clear deployment limits before social, economic, or democratic harms compel those decisions. Addressing this challenge will require collaboration among research, policy, industry, and civil society, along with ongoing investment in public understanding, AI literacy, and ethical oversight.
For more information, see:
Medsker, L. R. (2026). Top AI ethics and policy issues of 2025: What to expect in 2026. AI Matters, ACM SIGAI.
FOCUS IN VIRGINIA
RESEARCH
- Analyze and curate science and technology R&D
- Verify and validate emerging innovations
- Fact check reports and claims of product developers
- Apply best practices from policy research
EDUCATION
- Develop learning materials on new technologies, ethical issues and frameworks, and policymaking
- Collaborate with schools and government agencies to adapt learning materials to their environments
OUTREACH
- Provide formal and informal learning opportunities to individuals, community organizations, educators, legislators, and their staffs
- Participate in conferences, webinars, and briefings
- Offer consultations with organizations adapting policy and ethics to their operations
FEATURED ISSUES
- Automation Technology
- Automated and Agentic Systems
- Quantum Computing
- Human-centered Technology
- Safe Technology and Systems
- Trustworthy AI
- Explainable AI
- Privacy
- Automated Weapons Systems
- Automated Vehicles
- Misinformation/Disinformation
- AI and Data Governance
- AI Environmental Impact
- Generative AI Use and Misuse
- Technology and Accessibility
- Technology Impact on Youth and Children
LEADERSHIP
COLLABORATORS
Dr. Jean-Pierre Auffret
Director of CARE, co-director of CRC, and director of Research Partnerships, George Mason University
Farhana Faruqe
Assistant Professor, University of Virginia (UVA) School of Data Science
Anthony J. Rhem
CEO/Principal Consultant, A.J. Rhem & Associates