OpenAI introduces GPT-4o model, promising real-time conversation

OpenAI announced a new model, GPT-4o, which will soon allow real-time conversation with an AI assistant.

In a May 13 demo, OpenAI members showed that the model can provide breathing feedback, tell a story, and help solve a math problem, among other applications.

Head of Frontiers Research Mark Chen noted that, although users could previously access Voice Mode, the new model allows interruptions, no longer has a multi-second delay, and can recognize and communicate in various emotional styles.

OpenAI CEO Sam Altman commented on the update in a separate blog post, calling it the “best computer interface I’ve ever used,” adding that it “feels like AI from the movies.”

He said:

“Getting to human-level response times and expressiveness turns out to be a big change.”

In addition to improved text, voice, and vision capabilities, GPT-4o is faster and offers the same level of intelligence as GPT-4.

Full availability pending

Initially, GPT-4o will have limited features, but the model can already understand and discuss images “much better than any existing model.” In one example, OpenAI suggested that the model can examine a menu and provide translations, context, and recommendations.

Each of the company’s subscription tiers includes different access limits. Starting today, ChatGPT Free users can access GPT-4o with usage limits, while ChatGPT Plus and Team users can access it with five times greater usage limits.

The company also plans to extend the feature to Enterprise users later with “even higher limits.”

OpenAI will introduce the updated “Voice Mode” in the near future. It plans to release an alpha in the coming weeks, with early access for Plus users.

Competitive AI sector

OpenAI’s announcement follows recent upgrades from competing companies.

In March, Anthropic released an upgrade to Claude that it called superior to OpenAI’s GPT-4. Meta, meanwhile, announced Llama 3 with an improved parameter count in April.

Further industry developments are on the horizon. Google is set to host its I/O conference on May 14, featuring AI in several keynotes, while Apple is expected to announce iOS 18 in June with various new AI features.
