On May 4, the White House announced steps to help ensure the development of responsible artificial intelligence (AI). The steps include public-private partnerships, government investment, transparency, and emphasizing AI innovation which will serve the public good.

The announcement created seven new National AI Research Institutes, funded with an investment of $140 million, that brought the number of such institutes to 25.

Prior steps taken by the administration include, Blueprint for an AI Bill of Rights , AI Risk Management Framework and a roadmap for standing up a National AI Research Resource. In addition, “a joint statement was recently issued by the Federal Trade Commission, Equal Employment Opportunity Commission, Department of Justice, and the Consumer Financial Protection Bureau which highlighted responsible innovation. Innovation which falls within established laws, with special emphasis on data and dataset (and bias within), opacity and access, design and use.

Vice President Kamala Harris and administration officials met with the industry leaders in the realm of AI which resulted in “frank and constructive discussion” which focused on three areas:

  • the need for companies to be more transparent with policymakers, the public, and others about their AI systems;
  • the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and
  • the need to ensure AI systems are secure from malicious actors and attacks.

Responsible AI development requires the various systems to be reviewed and assessed (red teamed).

DEFCON 31 – AI Village

The administration’s desire for transparency includes a proactive step which includes the “Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31.” DEFCOn is the annual hacking conference held each year in Las Vegas in August. The AI Village first appeared at DEFCON in 2018.

Scale AI, whose platform will be used by the red team hackers at the conference claims their mission is “to accelerate the development of AI applications.” In a statement from Steve Cattel, the founder of AI Village, noted how companies have used specialized Red Teams to evaluate software, usually in private. He continued, “The diverse issues with these models will not be resolved until more people know how to red teams and assess them. Bug bounties, live hacking events, and other standard community engagements in security can be modified for machine learning model-based systems. These fill two needs with one deed, addressing the harms and growing the community of researchers that know how to help.”

The administrative hopes “This independent exercise will provide critical information to researchers and the public about the impacts of these models and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.”

Next Steps for AI

Office of Management and Budget is on the hook to issue policy guidance on the use of AI within the federal government. Speaking on background, and administrative official noted, this will “further our efforts to lead by example in mitigating AI risks and harnessing AI opportunities. The official also noted the risks are far reaching ranging from autonomous vehicles, cybersecurity risks, risks to civil rights, such as bias that’s embedded in housing or employment decisions; risks to privacy, such as enabling real-time surveillance; risks to trust in democracy, such as the risks from deep fake; and, of course, risks to jobs and the economy, thinking about job displacement from automation now coming into fields that we previously thought were immune.”

There is much to do to ensure responsible AI development, and it is clear we are at the beginning of the journey.

Related News

Christopher Burgess (@burgessct) is an author and speaker on the topic of security strategy. Christopher, served 30+ years within the Central Intelligence Agency. He lived and worked in South Asia, Southeast Asia, the Middle East, Central Europe, and Latin America. Upon his retirement, the CIA awarded him the Career Distinguished Intelligence Medal, the highest level of career recognition. Christopher co-authored the book, “Secrets Stolen, Fortunes Lost, Preventing Intellectual Property Theft and Economic Espionage in the 21st Century” (Syngress, March 2008). He is the founder of securelytravel.com