Over the past year, the world of artificial intelligence (AI) has exploded in capability. But with few rules and regulations governing this new Wild West, not all of its rapid growth has been good.
The White House changed the AI landscape this week with a new Executive Order that aims to regulate the largely unregulated AI industry by mitigating risks while capitalizing on its potential. The order's 10 mandates focus on future generations of AI models, not on current models such as ChatGPT.
10 Mandates In the AI Executive Order
The Executive Order addresses, with an eye toward the future, what the White House considers the most pressing issues in AI today.
- Developers of powerful AI systems, such as Google, OpenAI and Microsoft, must share the results of their safety tests with the federal government.
- Red-team testing must adhere to the high standards of the National Institute of Standards and Technology (NIST).
- Science and biology-related projects must meet the new standards for biosynthesis screening.
- Guidance is forthcoming from the Department of Commerce that will require AI-generated content to be watermarked for authenticity.
- The AI Cyber Challenge will develop a high-level cybersecurity program to ensure the security of AI tools.
- The Executive Order calls on lawmakers to ensure data privacy is protected when individuals use AI tools.
- Government agencies and third-party data brokers must use public datasets responsibly; they do not have free rein over the information available or what they can do with it.
- AI algorithms must not discriminate or form a bias against any group of individuals.
- To attract top AI talent, visa criteria for immigrants with AI expertise will be updated so these individuals can legally seek fellowships and job opportunities in the U.S.
- Best practices will be developed to protect workers from AI-related harms such as surveillance, job displacement and other forms of discrimination.
The U.S. is not the only country seeing a need to regulate the AI industry. The European Union (EU) is working on its AI Act legislation and is expected to reach a deal by the end of the year. China has already unveiled new regulations that address the growth of its AI industry while retaining control over what information its AI systems can release.
The G-7, which includes the United States, France, Germany, Italy, Japan, Britain and Canada, as well as the EU, also released its International Code of Conduct for Organizations Developing Advanced AI Systems, which calls on companies in these countries to conduct regular assessments to mitigate the risk that their AI systems could inadvertently enable the creation of biological or nuclear weapons.
What AI can create, both good and bad, is a global concern. Countries around the world are forging ahead with legislation they believe will best protect their people while ethically capitalizing on the awesome power of AI itself.