Gryphon Scientific Inaugural Member in U.S. Artificial Intelligence Safety Institute Consortium (AISIC)

Gryphon Scientific is an inaugural member of the U.S. Department of Commerce’s National Institute of Standards and Technology’s (NIST) groundbreaking U.S. Artificial Intelligence Safety Institute Consortium (AISIC), created to champion the development of secure and reliable artificial intelligence (AI). NIST’s establishment of AISIC reflects its commitment to empowering stakeholders, mitigating risks, and fostering responsible AI development through measurement science.

AISIC unites more than 200 private companies, academic research teams, non-profit organizations, and U.S. Government agencies committed to advancing research and development for safe and trustworthy AI systems. These entities represent the nation’s largest companies and its innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments, and it will work with organizations from like-minded nations that have a key role to play in setting interoperable and effective safety standards around the world.

Gryphon Scientific will be one of more than 200 leading AI stakeholders helping to advance the development and deployment of safe, trustworthy AI under the new U.S. Government safety institute.

Artificial Intelligence (AI) is a diverse and complex field that will increasingly integrate into everyday life. The promise of AI is nearly unlimited, but it comes with significant risks. It is important to develop practices and policies that ensure AI products are safe and trustworthy. Given the breadth of AI’s potential use and the importance of AI safety, NIST’s new U.S. Artificial Intelligence Safety Institute (USAISI) and the related consortium (AISIC) have a critical mandate to fulfill: equip and empower U.S. AI practitioners with the tools to responsibly develop safe AI.


“Gryphon Scientific is grateful to be included as a founding member of AISIC, and we look forward to collaborating with NIST and many other stakeholders in the field of AI safety in the coming years through the important work of the consortium.”

-Dr. Margaret Rush, Chief Scientific Officer, Gryphon Scientific

Gryphon Scientific currently works with the developers of leading (and niche) Large Language Models (LLMs) to reduce the risk that these models could be misused to cause harm with biology. Our work has included red-teaming, developing evaluations for ongoing testing, running controlled experiments, and developing biosecurity policies for LLMs. A summary of some of our recent work was submitted as part of the congressional record for the Ninth Bipartisan Senate Forum on Artificial Intelligence. Gryphon Scientific looks forward to continuing collaboration with industry, academia, and government to bring expertise to AISIC.