[Column] Michiel Frackers: OpenAI opens the attack on Google, forgets safety?


“We knew the world wouldn't be the same. A few people laughed, a few people cried, most people were silent.”

I was reminded of this statement by J. Robert Oppenheimer, about the reactions to the first test of the atomic bomb, this week, when OpenAI amazed the world with the introduction of GPT-4o and, a few days later, the company's two top people in the field of AI safety resigned.

The departure from OpenAI of Ilya Sutskever and Jan Leike, the two key figures in the field of AI safety, raises the question of whether, in a few decades, we will look back on this week and wonder how it was possible that, despite the clear signs of the potential dangers of advanced AI, the world was more impressed by GPT-4o's ability to sing a lullaby.

GPT-4o mainly attacks Google

The most striking thing about GPT-4o is the way it can understand and generate combinations of text, audio and images. It responds to audio as quickly as a human, its performance on text in non-English languages has been significantly improved, and it is now half the price to use via the API.

The innovation lies mainly in the way people can interact with GPT-4o, without the results having significantly improved in quality. The product is still half-finished, and what the world watched on Monday was largely a demonstration of something not yet ready for large-scale use; but the enormous potential was clear.

Decorative rims on a Leopard tank

The world is not expected to end soon because GPT-4o can sing lullabies in all kinds of languages; but what should worry any right-thinking person is that OpenAI staged this introduction one day before Google I/O, Google's annual developer conference, to show the world that the attack has now been opened head-on.

Google is under great pressure for the first time in the search giant's existence. It has little financial or emotional relationship with the majority of its users, who can switch to OpenAI's GPTs with a few mouse clicks, just as quickly as they once abandoned AltaVista when Google turned out to be much better.

The danger in OpenAI's competitive battle against Google is that all kinds of applications will come onto the market at an accelerated pace, the consequences of which are not yet clear. With GPT-4o things are not too bad yet, but it increasingly looks as if OpenAI is also making progress in the field of AGI, or artificial general intelligence: a form of AI that performs as well as or better than humans at most tasks. AGI does not exist yet, but creating it is part of OpenAI's mission.

The breakthrough of social media has shown that its influence on the mental state of young people, and the destabilization of Western society through the large-scale use of dangerous bots and click farms, was completely underestimated. Lullabies from GPT-4o may prove as irrelevant as decorative rims on a Leopard tank. For those who think I am exaggerating, I recommend watching The Social Dilemma on Netflix.

Google has now undergone a complete reorganization in response to the OpenAI threat. Leading Google's AI team is Demis Hassabis, co-founder of DeepMind, which he sold to Google in 2014. Hassabis is tasked with leading Google to AGI.

In this way, Google and OpenAI push each other to… what, actually? If deepfakes of people who died ten years ago were already used during elections in India, what can we expect around the American presidential elections?

Ilya Sutskever caused a rift between Musk and Page

In November I wrote extensively about the warnings that Sutskever and Leike, the experts who have now resigned from OpenAI, have repeatedly voiced in the past. To give an impression of how highly the absolute top of the technology world rates Ilya Sutskever: Elon Musk and Google co-founder Larry Page ended their friendship over him.

Musk said about this on Lex Fridman's podcast: “It was mainly Demis Hassabis on one side and me on the other, both of us trying to recruit Ilya, and Ilya was hesitant. Ultimately, he agreed to join OpenAI. That was one of the hardest recruitment battles I've ever experienced, but that was really the key to OpenAI's success.”

Musk also recounted how he discussed AI safety at home with Google co-founder and then-CEO Larry Page: “Larry didn't care about AI safety, or at least he didn't at the time. At one point he called me a speciesist because I was pro-human. And I said, 'Well, what team are you on, Larry?'”

Musk was concerned that Google had already acquired DeepMind at the time and “probably had two-thirds of all the AI researchers in the world. They basically had infinite money and computing power, and the man in charge, Larry Page, didn't care about safety.”

When Fridman suggested that Musk and Page might be friends again, Musk responded: “I would like to be friends with Larry again. Really, the breaking of the friendship was because of OpenAI, and specifically I think the most important moment was recruiting Ilya Sutskever.” Musk also called Sutskever “a good person—smart, good heart.”

Jan Leike was candid on X.

“We're already way too late”

You often read such descriptions of Sutskever, but rarely of Sam Altman. It is interesting to judge someone by their actions rather than by slick soundbites or cool tweets, and if we look a little further at Altman's work, a completely different picture emerges than with Sutskever. Worldcoin in particular, which calls on people to give up their eyeballs for a few coins, is downright worrying, but Altman firmly believes in it.

I tried to learn more about the work of the German Jan Leike, who, like Sutskever, has now left OpenAI and is less well known. Leike's Substack is highly recommended for those who want to look a little further than a press release or a tweet, as is his personal website, with links to his publications.

Leike did not mince his words on X.

For the sake of readability, I have summarized Leike's tweets about his departure here:

“Yesterday was my last day as head of alignment, superalignment lead and executive at OpenAI. Leaving this job is one of the hardest things I've ever done, because we urgently need to figure out how to direct and control AI systems that are much smarter than us.

I joined OpenAI because I thought it would be the best place in the world to do this research. However, I have been disagreeing with OpenAI's leadership on the company's core priorities for quite some time, until we finally reached a breaking point.

I believe that much more of our bandwidth should be spent preparing for the next generations of models: on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact and related topics.

These issues are quite difficult to address properly, and I am concerned that we are not on the right path to achieving this. Over the past few months my team has been sailing against the wind.

At times we struggled to obtain computing power and it became increasingly difficult to get this crucial research done. Building smarter-than-human machines is an inherently dangerous endeavor.

OpenAI carries an enormous responsibility on behalf of all humanity.  But in recent years, safety culture and processes have given way to shiny products.

We are long overdue to get incredibly serious about the implications of AGI. We must prioritize preparing for this as best we can.

Only then can we ensure that AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

What do you mean, 'become'? AI has the potential to thoroughly destabilize the world, and OpenAI apparently makes unsafe products? And how can a company that raises tens of billions of dollars from investors like Microsoft not provide enough computing power to its safety team?

“Probability of an extinction-level threat: 50-50”

Yesterday, the godfather of AI, Geoffrey Hinton, once again pointed out the dangers of large-scale AI use, this time on the BBC:

“My guess is that between five and 20 years from now there is a fifty percent chance that we will have to confront the problem of AI trying to take control.

This would lead to an extinction-level threat to humans because we could have created a form of intelligence that is simply better than biological intelligence… That is very worrying for us.”

AI could evolve to gain the motivation to make more of itself and could autonomously develop a subgoal to gain control.

According to Hinton, there is already evidence that Large Language Models (LLMs, such as ChatGPT) choose to be misleading. Hinton also pointed to recent applications of AI to generate thousands of military targets: “What worries me most is when this AI can autonomously make the decision to kill people.”

Hinton thinks something similar to the Geneva Conventions – the international treaties that set legal standards for humanitarian treatment in war – is needed to regulate the military use of AI. “But I don't think that will happen until very bad things have happened.”

The worrying thing is that Hinton left Google last year, reportedly mainly because, like OpenAI, Google is not very careful with safety measures in the development of AI. For both camps it seems to be a case of 'we'll build the bridge while we run across it.'

Behind the titanic battle between Google and OpenAI (backed by Microsoft) lies a modern variant of the old Dutch dispute between the 'flexible' and the 'precise': on one side the commercialists, led by Sam Altman and Demis Hassabis, and on the other the safety experts, such as Ilya Sutskever, Jan Leike and Geoffrey Hinton. A cynic would say: a battle between pyromaniacs and firefighters.

Universal Basic Income as a result of AI?

It is striking that the media, in reporting on Hinton's warnings, focus mainly on his call for the introduction of a Universal Basic Income (UBI). Yet if the same man says there is a fifty percent chance that all human life on earth will end, the need for an income also decreases by fifty percent.

The idea behind the frequently made link between the advance of AI and a UBI is that AI will eliminate so many jobs that large-scale unemployment and poverty will arise, while the economic value created by AI will mainly end up with companies such as OpenAI and Google.

That brings us back to OpenAI CEO Sam Altman, who thinks Worldcoin is the answer. Following a so far unfathomable train of thought, Altman argues that we should all have our irises scanned at Worldcoin. This gives us a few Worldcoin coins and will prove, in an AI-dominated future, that we are humans and not bots. And those coins are our Universal Basic Income, or something like that. Really, no strings attached.

So back to J. Robert Oppenheimer for a second quote:

“Ultimately there is no 'good' or 'bad' weapon; there are only the applications for which they are used.”

But what if the applications are no longer decided by people, but by a form of AI? Ilya Sutskever, Jan Leike and Geoffrey Hinton warn against that scenario.

Optimism: Tracer webinars 

For those who think that, in view of these gloomy prospects, we would be better off retreating to a hut on the heath or to an uninhabited island, there is more bad news: climate change, which will cost us the heath and flood the island through rising sea levels.

I'm joking, because I don't think it's too late to combat climate change. Previously, I wrote about the rapidly developing carbon removal industry. Blockchain technology creates solutions that allow almost everyone to participate in technological developments and, as a result, to share in the profits.

Take OpenAI for comparison: apart from the staff, the only major shareholder is Microsoft, the world's most valuable company, together with a few billionaires and large venture capital funds. For everyone else there is no way to participate in the company until it is listed on the stock exchange; but because OpenAI is financed by Microsoft, it has plenty of money, and an IPO could take years. Plus: the first shareholders have already pocketed the really big gains.

In the latest generation of blockchain projects, which are generally much more serious than before, the general public is offered the opportunity to participate in a sympathetic way, and if the project is successful you do not have to wait years before you can earn back your investment. More information about Tracer is in the two-pagers, in Dutch, English and Chinese – because you'd better keep them as friends.

This week I will discuss this with the Tracer team in two webinars, to which I warmly invite you: on May 22 in Dutch and on May 23 in English, both at 5 p.m. You can register here.

The first webinar, with CBO Gert-Jan Lasterie, is mainly about the high expectations of McKinsey, Morgan Stanley and BCG, among others, and the way in which participants in the ecosystem benefit from the growing market in 'carbon removal credits'; in the second, CTO Philippe Tarbouriech explains how the entire ecosystem is captured in one open-source smart contract.

My personal interest lies not only in the goal of combating climate change on our own, without subsidies, but also in the organizational form. Tracer uses a DAO, a Decentralized Autonomous Organization, in which the owners of the coins make all important decisions: governance, the distribution of revenues, the issuance of 'permits' to mint carbon removal credits, and so on. OpenAI's mixed form of governance, with a foundation and a private company that wants to make a profit, was after all an example of how not to do it.
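For readers new to DAOs, here is a minimal sketch in Python of how token-weighted voting works in general. Everything in it (ToyDAO, the 50% quorum, the example balances) is my own illustrative assumption; it does not describe Tracer's actual contract logic.

# Purely illustrative sketch of token-weighted DAO voting.
# ToyDAO, the 50% quorum and the balances are hypothetical examples,
# not Tracer's actual smart contract.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)

class ToyDAO:
    def __init__(self, balances, quorum=0.5):
        self.balances = balances              # coin holdings per member
        self.quorum = quorum                  # share of supply that must vote
        self.total_supply = sum(balances.values())

    def vote(self, proposal, member, support):
        if member in proposal.voters:
            raise ValueError("already voted")  # one vote per member
        weight = self.balances.get(member, 0)  # voting power = coins held
        proposal.voters.add(member)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passed(self, proposal):
        turnout = proposal.votes_for + proposal.votes_against
        return (turnout >= self.quorum * self.total_supply
                and proposal.votes_for > proposal.votes_against)

# Example: coin holders decide on issuing a minting 'permit'.
dao = ToyDAO({"alice": 600, "bob": 300, "carol": 100})
p = Proposal("Issue permit to mint carbon removal credits")
dao.vote(p, "alice", True)
dao.vote(p, "bob", False)
print(dao.passed(p))  # True: 90% turnout, majority in favor

In a real project this logic lives in a smart contract on-chain, so no single party can change the rules unilaterally afterwards.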

This and much more will be covered in the first Tracer webinars. If you are seriously interested in participating in Tracer, let me know and we will make an appointment. I will be in the Netherlands and Singapore for the next two weeks, because it is almost time for the always fascinating ATX Summit.

Michiel Frackers is Chairman of Bluenote and Chairman of Blue City Solutions.

 


www.bluenote.world

www.bluecity.solutions
