Google Unveils Gemini 2.0 – Slashdot
Google unveiled Gemini 2.0 yesterday, almost exactly one year after the initial Gemini launch. The new release offers enhanced multimodal capabilities like native image and audio output, real-time tool use, and advanced reasoning to enable agentic experiences, such as acting as a universal assistant or research companion. VentureBeat reports: During a recent press conference, Tulsee Doshi, director of product management for Gemini, outlined the system’s enhanced capabilities while demonstrating real-time image generation and multilingual conversations. “Gemini 2.0 brings enhanced performance and new capabilities like native image and multilingual audio generation,” Doshi explained. “It also has native intelligent tool use, which means that it can directly access Google products like search or even execute code.”
The initial release centers on Gemini 2.0 Flash, an experimental version that Google claims operates at twice the speed of its predecessor while surpassing the capabilities of more powerful models. This is a notable technical achievement, as speed improvements have typically come at the cost of reduced capability. Perhaps most significantly, Google introduced three prototype AI agents built on Gemini 2.0’s architecture that demonstrate the company’s vision for AI’s future. Project Astra, an updated universal AI assistant, showcased its ability to maintain complex conversations across multiple languages while accessing Google tools and retaining contextual memory of previous interactions. […]
For developers and enterprise customers, Google introduced Project Mariner and Jules, two specialized AI agents designed to automate complex technical tasks. Project Mariner, demonstrated as a Chrome extension, achieved an impressive 83.5% success rate on the WebVoyager benchmark for real-world web tasks — a significant improvement over previous attempts at autonomous web navigation. Supporting these advances is Trillium, Google’s sixth-generation Tensor Processing Unit (TPU), which becomes generally available to cloud customers today. The custom AI accelerator represents a massive investment in computational infrastructure, with Google deploying over 100,000 Trillium chips in a single network fabric.