Tuesday, July 16, 2024

OpenAI sets up a safety team for training a new GPT model

OpenAI strides towards AGI with new AI model training, bolstering safety efforts through a dedicated committee amid internal challenges.

OpenAI has recently disclosed that it has begun training its next-generation AI model, a step toward its goal of Artificial General Intelligence (AGI). Although the company has not confirmed whether the model is GPT-5, CEO Sam Altman estimates that AGI is still more than five years away.

The AI startup has established a new “Safety and Security Committee” responsible for overseeing risk management across its projects and operations. The move follows not only the announcement of the new model but also a turbulent period in which leaked internal documents revealed company policies hostile to employees.

In AI terminology, a “frontier model” refers to a novel AI system designed to push boundaries beyond current capabilities. AGI represents a theoretical AI system capable of performing tasks akin to humans, even those it hasn’t been explicitly trained for, in contrast to narrow AI, which is tailored for specific tasks.

Meanwhile, the newly established Safety and Security Committee, chaired by OpenAI director Bret Taylor and joined by directors Adam D’Angelo and Nicole Seligman along with CEO Sam Altman, will advise the company’s full board of directors on matters concerning AI safety.

In this context, “safety” extends beyond merely preventing AI from spiraling out of control and exerting dominance globally. It covers a broader spectrum of “processes and safeguards,” as detailed by the company in a safety update on May 21, including alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.