OpenAI promises to provide the U.S. government with early access to its artificial intelligence model

Achaoui Rachid

Sam Altman shared some welcome news on Twitter over the weekend, announcing that OpenAI will give the U.S. AI Safety Institute early access to its next AI model. The decision underscores OpenAI's stated commitment to improving AI safety, particularly by partnering with outside organizations.


OpenAI Partners with AI Safety Institute for Early Model Access


Under the arrangement, OpenAI will use the U.S. AI Safety Institute as a key reviewer of the new model. Both organizations share the same goal: advancing the science of AI evaluations so that future AI systems are built with safety mechanisms in mind.


The U.S. AI Safety Institute was formally established within NIST in January of this year. It represents one of the first formal efforts to define how AI safety should be measured and improved.


Although NIST officially announced the AI Safety Institute in 2024, it was first unveiled by Vice President Kamala Harris at the AI Safety Summit in the United Kingdom the year before, in 2023. This reflects continuing engagement by governments and organizations in AI safety.


OpenAI's decision to grant early access to a model still in development is another sign of the tech industry's growing focus on how AI is used and distributed. It also highlights the security questions raised by emerging AI technologies.


Balancing Innovation and Safety: The OpenAI Dilemma


According to NIST's description of the consortium, its purpose is to develop science-based policy recommendations for AI and to promote AI safety worldwide through rigorous standards. The initiative aims to create a safe and trusted AI environment built on clear, empirically grounded rules.


Last year, both OpenAI and DeepMind pledged to give the UK government access to their AI models. That partnership was intended to promote best practices and clear rules for deploying AI systems.


However, recent reporting from TechCrunch points to growing doubts about OpenAI's safety mission. Skeptics believe the drive to build ever more capable models could further weaken safeguards within the company.


The OpenAI board briefly dismissed Sam Altman, amid rumors that safety and security concerns were the cause, but it quickly reversed that decision. The board later explained that Altman had been let go over a breakdown in communication rather than over safety issues.


This feeds into a wider debate in the global AI community about the safety implications of rapid AI advancement. A central worry is that safety work does not keep pace with the development of AI capabilities.


OpenAI's Safety Shake-Up: From Superalignment to Self-Regulation


OpenAI dissolved its Superalignment team in May 2024; the team had focused on keeping humans safe as generative AI advances. The decision came shortly after co-founder Ilya Sutskever, who co-led the team, left the company.


Jan Leike, the team's other lead, also quit the company. He took to Twitter to voice his concerns, saying that OpenAI, a company he had long supported, was neglecting safety in favor of new products.


Leike's criticism highlighted a trend that has worried industry observers: OpenAI appears to be placing less emphasis on safety. He said the company's commercial goals were overshadowing the importance of safe AI practices.


At the end of May, OpenAI announced a new safety committee in a blog post, led by board members including Sam Altman. Some saw this as a way to retain oversight now that the Superalignment team no longer exists to do that job.


The new safety group, created to develop fresh safeguards, has sparked debate about how effective self-regulation in AI can be. Critics worry that without a dedicated, independent safety team, the company may struggle to balance innovation with the safe use of AI.
