Public input sought on AI accountability measures to guarantee privacy and transparency

Amid the excitement surrounding generative AI and chatbots, the US government is cautiously moving towards setting regulations for AI tools.

During a press conference at the University of Pittsburgh, Alan Davidson, the head of the National Telecommunications and Information Administration (NTIA), stated that accountability mechanisms for AI can enhance trustworthiness much as financial audits create confidence in financial statements. The US Commerce Department has requested public comment to advise policymakers on how to approach AI and create accountability measures.

Davidson emphasized that the NTIA is requesting input from various sources, including industry groups, researchers, and privacy and digital rights organizations, to develop audits and assessments for AI tools developed by the private sector. Additionally, the NTIA aims to establish parameters that enable the government to verify whether AI systems operate as advertised, are safe and effective, avoid discriminatory outcomes or unacceptable bias, avoid spreading misinformation, and protect individual privacy. He stressed the need for prompt action due to the rapidly evolving nature of AI technologies, stating that the urgency feels much greater than with previous technologies.

The Biden administration has already released a voluntary AI “bill of rights” that outlines five key principles for companies to follow when developing AI systems, such as ensuring data privacy, guarding against algorithmic bias, and providing transparency about the use of automated systems. The National Institute of Standards and Technology has also developed a voluntary framework for AI risk management that companies can use to minimize harm to the public. Davidson further noted that several federal agencies are exploring how existing regulations can be applied to AI.

While European countries have been quick to implement national regulations in response to rapidly advancing technologies, the US federal government has historically been slow to do so. This has allowed tech companies to collect and share user data with relatively few federal restrictions, which has facilitated the growth of data brokers that buy and sell user data. Consequently, consumers find it challenging to protect their private information from third parties or law enforcement agencies.