Google’s next-generation AI model “Gemini” to launch this fall


Google’s next-generation multimodal AI model, Gemini, will launch this fall to compete with OpenAI’s GPT-4 and will be made available to AI app developers, according to The Information, which cites an anonymous person involved in Gemini’s development.
Sources describe Gemini as a “large collection of AI models.” This suggests that, much as OpenAI is rumored to have done with GPT-4, Google may be adopting a “mixture of experts” architecture, in which the overall model is composed of multiple expert sub-models with specific abilities. It could also mean that Google wants to offer Gemini in a variety of sizes, which would likely be cost-effective.
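Nothing about Gemini’s internals has been confirmed, but the general idea behind a mixture-of-experts design can be sketched in a few lines: a small router network scores each input token and only the top-scoring expert sub-networks are actually run, so a very large total parameter count does not mean every parameter is used on every request. The layer below is purely illustrative; the sizes and module names are assumptions, not details from the report.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Toy mixture-of-experts layer: a router picks the top-k experts per token."""
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)          # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                   # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)         # routing probabilities
        top_w, top_idx = weights.topk(self.top_k, dim=-1)   # keep only k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

# Only top_k of the 8 expert MLPs run for any given token, which is how a model
# with a huge total parameter count can stay relatively cheap at inference time.
layer = MixtureOfExperts()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```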
Gemini is said to be a multimodal model that can handle not only text but also images. Because Gemini is also trained on YouTube video transcripts, it should be able to generate simple videos, similar to RunwayML Gen-2 and Pika Labs. Its coding ability is also said to be significantly improved.
Google plans to gradually integrate Gemini into products such as its Bard chatbot, Google Docs, and Slides. Later this year, Gemini will be made available to external developers via Google Cloud.
Many Google employees involved
According to The Information, at least 20 executives are involved in developing the model. The Gemini development team, drawn from Google Brain and Google DeepMind, reportedly includes several hundred employees.
The recently merged Google DeepMind has yet to find the right balance in some areas, including its remote work policy and the technology used to train its models, The Information reports. DeepMind has reportedly abandoned a ChatGPT competitor codenamed “Goodall,” based on an unreleased model called “Chipmunk,” in favor of Gemini.
The Gemini team is led by DeepMind founder Demis Hassabis, supported by two DeepMind executives, Oriol Vinyals and Koray Kavukcuoglu, as well as former Google Brain chief Jeff Dean. Google co-founder Sergey Brin is also involved in Gemini’s development and is said to be helping train and evaluate the model.
Gemini’s training materials are closely monitored by Google’s legal department, and copyrighted books and other materials appear to be thoroughly excluded. According to The Information’s sources, Gemini was also inadvertently trained on “offensive” content, which led to a (partial) retraining of the model.
Gemini’s existence was officially announced in May. Previous rumors have suggested that the model will have at least 1 trillion parameters, and that tens of thousands of Google’s TPU AI chips will be used to train it.
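The 1-trillion-parameter figure is a rumor rather than a confirmed spec, but a rough calculation shows why training at that scale would involve so many chips: merely holding the weights in 16-bit precision takes about 2 TB of memory, far more than any single accelerator provides. In the sketch below, the TPU v4 memory figure is a public spec, while the parameter count is the rumored one.

```python
# Back-of-envelope sizing for the rumored (unconfirmed) 1-trillion-parameter model.
params = 1e12                     # rumored parameter count
bytes_per_param_bf16 = 2          # bfloat16 weights: 2 bytes each
hbm_per_tpu_v4_gb = 32            # HBM per TPU v4 chip (public spec)

weights_tb = params * bytes_per_param_bf16 / 1e12
chips_just_for_weights = params * bytes_per_param_bf16 / (hbm_per_tpu_v4_gb * 1e9)

print(f"Weights alone: ~{weights_tb:.0f} TB in bf16")
print(f"TPU v4 chips needed just to hold the weights: ~{chips_just_for_weights:.0f}")
# Training also needs optimizer state, activations, and data-parallel replicas,
# so the real chip count would be far above this lower bound.
```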
Google DeepMind CEO Demis Hassabis said in late June that Gemini will “combine some of the strengths of AlphaGo-type systems with the incredible language capabilities of large-scale models.” He also said, “There are some new innovations that are going to be pretty interesting.”