Generative Artificial Intelligence (AI) is a broad term used to describe computing techniques and tools that can create new content such as text, speech and audio, images and video, and computer code. While such systems offer tremendous opportunities to benefit society, they also pose very significant risks. The increasing power of generative AI systems, the speed of their evolution, their broad application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm will likely not suffice.
Content provenance refers to the facts about the history of a piece of digital content (image, video, audio recording, or document). The Coalition for Content Provenance and Authenticity (C2PA) released technical specifications for its Content Credentials in 2022.
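To make the idea concrete, the sketch below shows how a history of actions can be cryptographically bound to a content asset via its hash. It is illustrative only: the field names and structure are assumptions made for this example, not the actual C2PA Content Credentials manifest format, which additionally involves signed claims and certificate chains.

```python
# Hypothetical, simplified provenance manifest for a content asset.
# NOT the C2PA specification; field names are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(asset_bytes: bytes, creator: str, actions: list[str]) -> dict:
    """Bind a recorded history of actions to an asset via its SHA-256 hash."""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "actions": actions,  # e.g., ["captured", "cropped", "color-corrected"]
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

asset = b"...raw image bytes..."
manifest = build_manifest(asset, "example-photographer", ["captured", "cropped"])
print(json.dumps(manifest, indent=2))

# Verification: any change to the asset bytes breaks the recorded hash,
# so tampering with the content is detectable.
assert manifest["asset_sha256"] == hashlib.sha256(asset).hexdigest()
```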
Data provenance refers to a documented trail that accounts for the origin of a piece of data and every place it has moved since. The purpose of data provenance is to tell developers the origin of the data, the changes made to it, and the details supporting confidence in its validity. The concept of provenance ensures that data creators are transparent about their work: where the data came from, and the chain of information through which the data can be tracked as it is used and adapted by others for their own purposes.
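As a minimal sketch of such a documented trail, the example below records a dataset's origin and each subsequent change to it. The class and field names are hypothetical, chosen for this illustration; real lineage systems track far richer metadata.

```python
# Minimal, hypothetical data provenance trail: origin plus documented changes.
from dataclasses import dataclass, field

@dataclass
class ProvenanceTrail:
    origin: str                          # where the data was first obtained
    steps: list[dict] = field(default_factory=list)

    def record(self, actor: str, change: str) -> None:
        """Append one documented change, preserving the full chain."""
        self.steps.append({"actor": actor, "change": change})

    def history(self) -> str:
        """Render the lineage from origin through every recorded change."""
        chain = " -> ".join(s["change"] for s in self.steps)
        return f"{self.origin} -> {chain}" if self.steps else self.origin

trail = ProvenanceTrail(origin="census-survey-2020")
trail.record(actor="ingest-pipeline", change="deduplicated rows")
trail.record(actor="analyst", change="normalized income to USD")
print(trail.history())
# census-survey-2020 -> deduplicated rows -> normalized income to USD
```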
The effectiveness of mechanisms and metrics implemented to promote trust in AI must be empirically evaluated to determine whether they actually work. Distrust of AI implicates its trustworthiness and calls for a deeper understanding of stakeholder perceptions, concerns, and fears associated with AI and its specific applications. Fostering public trust in AI will require policymakers to demonstrate how they are holding industry accountable to the public and its legitimate concerns.