It’s official: OpenAI has started training a new AI model


OpenAI, the company behind the popular chatbot ChatGPT, announced in a blog post this afternoon that the training of its next AI model has officially begun. The successor to GPT-4 is therefore officially underway, and the company took the opportunity to announce the creation of a new security committee.

For now, the company has not revealed the name of this model. Until now, it has followed a relatively simple numbering scheme: GPT-2 succeeded the original GPT before being replaced by GPT-3, then GPT-3.5 and GPT-4. More recently, however, Sam Altman’s team deviated a bit from this line with variants like GPT-4 Turbo and GPT-4o, new versions built on the same fundamental model but optimized for speed.

The whole question is therefore whether this new version will be another variant of GPT-4 or a completely new foundation model, which would presumably be called GPT-5. The company does not say so explicitly, but the wording of the press release suggests the latter. Indeed, the company believes this will be a major step toward the creation of an artificial general intelligence (AGI) — that is, a system able to carry out all the intellectual tasks humans are capable of, or even surpass them.

The safety debate reignites

This is a longstanding ambition of the company; its founding charter, published when it was created in 2015, already made reference to it. But with the democratization of machine learning we are now witnessing, more and more observers are beginning to question the merits of this mission. Some worry about the economic and social impact of such a tool, while others emphasize privacy and security risks.

In this context, is it really reasonable to develop an almost omniscient entity, with everything that implies for humanity? Where should we stop? These are legitimate questions worth asking. OpenAI claims to take these topics very seriously, but this position has not always convinced the public — and for good reason: historically, the company has not been particularly forthcoming about the measures put in place to avoid these pitfalls.

A new safety and security committee

But now, Sam Altman and his colleagues seem to want to make an effort at transparency by encouraging “strong debates”. Indeed, the press release mainly announces the creation of a new “Safety and Security Committee”, which will be responsible for making recommendations to the board of directors to avoid potential abuses. Its first task will be to take stock of the procedures and safeguards OpenAI has erected so far. At the end of this first phase, which will last 90 days, the report will be reviewed by the company’s leadership, which will then publish “an update on adopted recommendations”.

The catch: this new committee will be chaired by CEO Sam Altman himself — not necessarily good news for the neutrality of its recommendations. And even if other members raise serious objections, the board, where Altman also serves, has no obligation to take them into account. Moreover, the press release suggests that only adopted recommendations will be published; the most uncomfortable ones are likely to fall by the wayside. Transparency, yes, but only up to a point.

In short: in practice, there is little chance that this committee will suffice to fill the void left by the dissolution of the famous “superalignment team” at the beginning of the month. This division, initially led by co-founder and former chief scientist Ilya Sutskever, who recently departed, had the mission of ensuring that the company’s models remained “controlled” and “aligned with human values”. Consequently, concerns about OpenAI’s methods and opacity will not subside anytime soon. It remains to be seen how the situation evolves with this new model, whose release date has not been announced.



