New version of ChatGPT: OpenAI gives vision and voice to its conversational bot

Thanks to a new model, GPT-4o, ChatGPT can now understand text, sound and images, and respond in writing, by voice, or by generating images.

Clearly, artificial intelligence never ceases to impress. The new version of ChatGPT, powered by the GPT-4o model and presented on Monday, May 13, is now able to imitate the behavior of a person.

The app is really amazing. No need to type anything anymore: you can simply chat with it as you would with a human. It answers quickly, without that little pause of a second or two between the question and the answer, so the exchange is much more interactive. Even the voice is more natural. Another new feature: you can now interrupt it and ask it anything, for example to change the tone of the story it is telling.

The conversational bot can now adapt to our emotions, for example if we are angry, in a hurry or tired. This new feature will definitely give call centers ideas, so don’t be surprised if tomorrow robots try to calm us down when we call customer service.

It’s not just voice: the app also knows how to handle vision. In one demonstration, it was asked, for example, “Help me solve the equation I’m writing, but without giving me the answer.” All you have to do is film what you write, and ChatGPT reacts in real time, correcting you when you make a mistake, just as if you had a teacher next to you. In another example, a visually impaired person points their phone at the street and asks, “Notify me when a free taxi arrives.” And it works! It is the phone’s camera that gives the application its eyes.
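For developers, this kind of multimodality is also reachable through OpenAI’s API. Here is a minimal sketch, assuming the official `openai` Python library (v1+) and an API key in the environment; the image URL and the question are placeholders, not part of OpenAI’s demo:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send GPT-4o a text question together with an image, echoing the vision demo above.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Is there a free taxi in this street scene?"},
                {
                    "type": "image_url",
                    # Placeholder URL: replace with a real, publicly reachable image.
                    "image_url": {"url": "https://example.com/street.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```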

The application should be available soon. The Mac version is already out; the mobile and Windows versions will arrive in the coming weeks. It is on its way to becoming the kind of ultimate assistant that, until now, we had only imagined in science-fiction movies.
