GPT-4o first reactions: ‘essentially AGI’

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models regarding bias, performance, and ethical compliance across diverse organizations. Find out how you can attend here.


It’s only been a few hours since OpenAI took the wraps off its newest large language model (LLM), GPT-4o (the “o” stands for “omni”), but initial reactions to the event and the technology are already rolling in.

It’s safe to say that at this early stage, the reaction is mixed. While some came away from OpenAI’s short (26-minute) demo presentation wanting more, the company has since released a plethora of video demos and more information about the new foundation model, which it says is faster than its prior flagship GPT-4, more affordable for third-party developers, and, perhaps most important of all, more emotional: better at detecting and mimicking human expressions, principally through audio.

It’s also free to use through ChatGPT for all users, even non-subscribers, though paying subscribers get access first (the update is rolling out over the coming weeks). For now, only the text and vision capabilities are available; audio and video are coming in the next few weeks.

GPT-4o was trained from the ground up to treat text, audio, and visual data equally, converting each modality directly into tokens rather than first turning everything into text as before, which enables the speed increase and cost decrease.
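OpenAI hasn’t published GPT-4o’s internals, so the following Python sketch is purely illustrative of the architectural difference described above: a cascaded pipeline with lossy text handoffs at every stage versus a single model attending over one interleaved token stream. Every function name and token value here is invented for illustration.

```python
def cascaded_pipeline(audio, image):
    """Legacy approach: separate models chained together, with a lossy
    text handoff between each stage (tone, timing, etc. are discarded)."""
    transcript = f"transcribed({audio})"              # speech-to-text model
    caption = f"captioned({image})"                   # vision model
    reply_text = f"llm({transcript} + {caption})"     # text-only LLM
    return f"tts({reply_text})"                       # text-to-speech model

def unified_model(audio_tokens, image_tokens, text_tokens):
    """Omni-style approach: all modalities become tokens in one sequence
    that a single model processes end to end, with no text intermediary."""
    stream = [("audio", t) for t in audio_tokens]
    stream += [("image", t) for t in image_tokens]
    stream += [("text", t) for t in text_tokens]
    return stream

# All three modalities land in one sequence; no transcription step in between.
stream = unified_model([101, 102], [7, 8], [42])
```

The point of the sketch: in the cascaded version, everything the downstream LLM sees has already been flattened into text, whereas the unified version keeps audio and image information in the same token space the model reasons over.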


Here’s what the web is saying so far about it:

“OpenAI is eating Character AI’s lunch, with almost 100% overlap in form factor and huge distribution channels,” wrote Nvidia senior research manager and AI influencer Jim Fan on X. “It’s a pivot towards more emotional AI with strong personality, which OpenAI seemed to actively suppress in the past.”

“GPT-4o isn’t the big leap. This is,” stated University of Pennsylvania Wharton School professor and AI influencer Ethan Mollick.

AI influencer and startup advisor Allie K. Miller was excited about the new desktop ChatGPT app for macOS (with Windows to follow), which runs on GPT-4o. As she wrote on X:

“HOLY MOLY THIS IS THE WINNING FEATURE. It’s basically a coworker on screen share with you 24/7, with no fatigue. I can imagine people working for hours straight with this on.”

AI developer Benjamin De Kraker wrote that he believed it was essentially artificial general intelligence (AGI), an AI that outperforms humans at most economically valuable tasks, which has been OpenAI’s entire quest and raison d’être from the start.

“Alright I’m gonna say it… This is essentially AGI. This will be seen as magic to masses. What else do you call it when a virtual ‘person’ can listen, talk, see, and reason almost indistinguishably from an average human?”

Developer Siqi Chen was similarly impressed, citing GPT-4o’s newfound capability to render 3D objects from text prompts, writing on X that “this will prove to be in retrospect by far the most underrated openai event ever.”

On the flip side, journalist and author James Vincent stated that while the marketing for GPT-4o as a voice assistant was “canny,” it was ultimately “leaning into the masquerade of intelligence” as “voice…doesn’t necessarily indicate leaps forward in capability.”

Similarly, Chirag Dekate, a VP at market research and consulting firm Gartner who covers quantum technologies, AI infrastructure, and supercomputing, told VentureBeat in a phone call that he found the GPT-4o unveiling event and tech demos “a bit underwhelming, because it reminded me of the Gemini demos that I’ve already seen almost three months ago from Google.”

He said he believed a “capability gap” was growing between OpenAI and longer-established technology companies such as Google, Meta, and even OpenAI ally Microsoft, which is also training its own LLMs and foundation models. Those companies have more raw data to train new models on, more distribution channels to push them out, and their own cloud infrastructure and hardware (the Tensor Processing Unit, or TPU, in Google’s case) to optimize AI training and inference.

“OpenAI will struggle to activate the same sort of virtuous cycles” with its AI products, Dekate told VentureBeat.

Unsurprisingly, the most scathing response I saw came from self-described luddite (that is, anti-technology) influencer “Artisanal Holdout,” who posted on X:

“Yikes—OpenAI balked on GPT-5 and instead released GPT-4o over a year after the initial launch of GPT-4. Guess they aren’t confident enough in their tiny baby steps of development. How embarrassing for both OpenAI and AI bros alike.”

Meanwhile, Greg Isenberg, CEO of holding company Late Checkout, offered the opposite take on X: “The pace of change is unbelievable.”

AI educator Min Choi also applauded the release, saying it would “completely change the AI assistant game.”

Out for less than a day, and with many of its capabilities yet to reach the public, GPT-4o is a very young product. But given the impassioned responses so far, it’s clear OpenAI has struck a nerve.

VentureBeat has also received access (via my personal account) and will be testing the new model in the coming days. Stay tuned for our impressions once we’ve had more time with it.