Apple is working on a new AI system called ReALM that is attracting a lot of attention in the AI world. ReALM is designed to understand context, particularly references to what is on a user's screen, and Apple's researchers report that it can outperform OpenAI's GPT-4 at this task.
ReALM's main strength lies in its ability to interpret the information displayed on the screen, which makes interactions with virtual assistants such as Siri more natural. Imagine asking Siri to "call the pharmacy on Rainbow Road" after looking at a list of pharmacies: according to Apple's researchers, ReALM, unlike GPT-4, understands which entry you mean and completes the task without further clarification.
ReALM achieves this by combining two capabilities: resolving ambiguous references to on-screen content and interpreting conversational context. Together, these let users interact with the assistant much as they would in a human conversation.
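The general idea behind this kind of reference resolution is to convert what is visible on the screen into plain text that a language model can reason over alongside the user's request. The sketch below illustrates that idea in Python; the entity types, field names, and prompt format are illustrative assumptions, not Apple's actual implementation or API.

```python
# Hypothetical sketch of screen-based reference resolution:
# serialize on-screen entities into numbered text lines, then build a
# prompt asking a language model which entity the user is referring to.
# All names and structures here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ScreenEntity:
    kind: str   # e.g. "business" or "phone_number"
    text: str   # the visible text of the entity
    y: int      # vertical position, used to preserve reading order

def screen_to_text(entities: list[ScreenEntity]) -> str:
    """Serialize entities top to bottom into numbered lines that a
    text-only language model can reason over."""
    ordered = sorted(entities, key=lambda e: e.y)
    return "\n".join(f"[{i}] {e.kind}: {e.text}" for i, e in enumerate(ordered))

def build_prompt(screen_text: str, user_request: str) -> str:
    """Combine the serialized screen with the conversational request."""
    return (
        "Screen contents:\n" + screen_text + "\n\n"
        f"User request: {user_request}\n"
        "Answer with the index of the entity the user is referring to."
    )

entities = [
    ScreenEntity("business", "Sunset Blvd Pharmacy", y=120),
    ScreenEntity("business", "Rainbow Road Pharmacy", y=240),
    ScreenEntity("phone_number", "(555) 010-2233", y=260),
]
prompt = build_prompt(screen_to_text(entities), "call the pharmacy on Rainbow Road")
print(prompt)  # this prompt would then be passed to the resolution model
```

In a setup like this, the heavy lifting happens in the serialization step: once the screen is flattened into text, disambiguating "the pharmacy on Rainbow Road" becomes an ordinary language-modeling problem.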
ReALM can also interpret images embedded in text, which is useful for extracting information such as phone numbers or recipes from screenshots. Here too it appears to have an edge over GPT-4, which, according to Apple's researchers, is trained mainly on natural images rather than screenshots.
The development of ReALM signals Apple's potential entry into the highly competitive AI race alongside Microsoft and Google.
Although it is not yet officially known when, or whether, ReALM will be integrated into Siri or other Apple products, Apple CEO Tim Cook has hinted that the company will share more details about its AI work later this year. The news has generated a great deal of excitement, as ReALM's capabilities could considerably improve the virtual assistant experience.