AI Accountability

Trustworthiness is not inherent to artificial intelligence (AI) systems and tools. Designers and deployers of AI must demonstrate that their products are safe and effective—and therefore merit the public’s trust—through iterative accountability mechanisms that span the full development and deployment lifecycle and address risks related to both highly specialized and more general-purpose AI systems. Sociotechnical AI accountability mechanisms based on evaluation, access, and disclosure can begin to build justified public trust in AI, an essential predicate to adequately and effectively “aligning” these technological systems and tools with democratic and human values.

Recommendations to the US Department of Commerce (NTIA) on Policy for AI Accountability, June 2023