At Google’s annual I/O event, CEO Sundar Pichai said the company is building out Gemini to advance its AI efforts and evolve its assistants into agents. He walked through the new capabilities of the tech giant’s Gemini AI model. The conference covered how Google’s chatbots work, new camera- and screen-scanning capabilities, and a range of AI updates for Android. Throughout the event, the focus stayed on the latest AI features and Gemini model upgrades.
Sundar Pichai noted that presenters mentioned AI dozens of times on stage, and that some of the announced features, including the new Astra AI assistant, are rolling out now while others will arrive gradually. Many of the projects announced at Google I/O, such as Project Astra, Gemini updates, the Veo video-generation model, and AI-powered photo search, will roll out first in the U.S. Pichai said that since Gemini’s introduction, it has been used by 1.5 million developers, and Google has incorporated it into products used by 2 billion users. The announcements included a multitude of AI upgrades, spanning AI-powered Google Search, image generation, Veo, and more.
In its Google I/O 2024 announcements, Google focused on introducing new AI models, with Gemini at the center. The stated goal is to make artificial intelligence more powerful so that everyone can benefit from its capabilities.
Let’s look into the AI announcements Google made at Tuesday’s annual I/O event.
Gemini Updates
Google is upgrading its Gemini 1.5 Pro model with an expanded context window of 2 million tokens, a significant jump over competing chatbot offerings. Gemini users can also pull in files directly from Google Drive. Gemini 1.5 Flash, meanwhile, is a smaller model suited to high-frequency, low-latency tasks. Those subscribed to Gemini Advanced can use Gemini 1.5 Pro. Pichai said the company is continuously making progress toward its goal of infinite context.
Developers told Google they want AI models that are easy to access, cheaper, and more effective. In response, Google introduced Gemini 1.5 Flash alongside Gemini 1.5 Pro. Demis Hassabis, CEO of Google DeepMind, said, “1.5 Flash has long-context capabilities but is specially designed to provide excellent speed.” It can be accessed through Google AI Studio and Vertex AI.
New Project Astra AI Assistant
Pichai said Google is planning AI assistants and agents with better reasoning and planning. Hassabis said the company is building Gemini to be multimodal because it wants the model to be helpful for everyday tasks. This next-generation AI assistant can provide useful answers when a user points a smartphone or laptop camera at something, or asks Gemini about an image or items on screen. The Project Astra assistant can describe what it sees and understand the context of the user’s questions.
Pichai said this is just the beginning, and Google is still experimenting with these models. This year, Google aims to bring Project Astra’s video-understanding capabilities to Gemini Live, so the assistant can interpret video and respond to users’ questions.
AI Search Evolution
Google is also using these AI models to improve Search and deliver better results. With AI Overviews, the search giant will provide quick, summarized answers for a given subject or topic, building on the earlier Search Generative Experience (SGE). The company also introduced AI-organized results pages that group results under AI-generated headlines. The AI Overviews upgrade is accessible in the US today, and AI-organized search pages will arrive first for recipe searches, followed by other categories such as books, accessories, and restaurants.