HFES Joins Forces with NIST to Advance Artificial Intelligence Safety Measures

The Human Factors and Ergonomics Society (HFES) today announced it has been accepted into the Artificial Intelligence Safety Institute Consortium of the U.S. Department of Commerce's National Institute of Standards and Technology (NIST). This collaboration will pave the way for the development of a new measurement science that will promote the use of trustworthy and responsible artificial intelligence (AI) systems.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said U.S. Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The Consortium, which does not evaluate commercial products or endorse any specific services, aims to address the challenges identified under the roadmap for NIST's AI Risk Management Framework (AI RMF) and the October 30, 2023, Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This will be achieved through the collaborative establishment of a new measurement science that will identify proven, scalable, and interoperable techniques and metrics for the development and responsible use of safe and trustworthy AI. The Institute's work will enable the development and deployment of safe and trustworthy AI systems through the operationalization of the AI RMF.

As the leading organization in human factors and ergonomics, HFES is committed to promoting the safe and responsible use of AI in ways that account for its effects on human performance. By joining forces with NIST, HFES hopes to positively impact the development and deployment of AI systems that prioritize safety and trustworthiness.

According to Dr. Mica Endsley, HFES Government Relations Chair, "The human factors and ergonomics experts at HFES are excited to be a part of this groundbreaking initiative and look forward to contributing to the advancement of AI safety measures. With the rapid growth of AI technology, it is crucial to ensure its responsible use. This collaboration with NIST will play a significant role in achieving these goals.”

Source: https://www.hfes.org/
