News

Google Opens Up Gemini 2.0, Touting Multimodal Capabilities

Google opened up access to Gemini 2.0, a significant update to its flagship AI, targeting enterprise users and developers with enhanced multimodal capabilities and improved performance.

The new tech was announced last December as an experimental release available on Vertex AI, and today's announcement notes that Gemini 2.0 is now available on that cloud AI service and elsewhere (see "Google's Vertex AI Platform Gets Brand-New Gemini Pro LLM Tech").

"Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI," Google said in a Feb. 5 blog post. "Developers can now build production applications with 2.0 Flash."

Vertex AI is Google Cloud's unified machine learning platform, designed to streamline the entire machine learning lifecycle. It helps developers and data scientists build, deploy, and scale AI models more efficiently, from experimentation to production.
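
On Vertex AI, the same model is reached through the Vertex AI SDK and Google Cloud credentials rather than an API key. The following is a rough sketch under those assumptions; the project ID and region are placeholders.

    # Rough Vertex AI sketch (assumes: pip install google-cloud-aiplatform,
    # a Google Cloud project with the Vertex AI API enabled, and
    # application-default credentials; project and region are placeholders).
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="your-gcp-project", location="us-central1")

    model = GenerativeModel("gemini-2.0-flash")
    response = model.generate_content("Draft release notes for our Q1 API update.")
    print(response.text)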

Google's Vertex AI site lists new and enhanced features for Gemini 2.0 Flash, with an emphasis on multimodal capabilities:

  • Multimodal Live API: This new API enables low-latency bidirectional voice and video interactions with Gemini.
  • Quality: Improved performance over Gemini 1.5 Pro across most quality benchmarks.
  • Improved agentic capabilities: 2.0 Flash delivers improvements to multimodal understanding, coding, complex instruction following, and function calling, which together support better agentic experiences (a simple multimodal call is sketched after this list).
  • New modalities: 2.0 Flash introduces built-in image generation and controllable text-to-speech capabilities, enabling image editing, localized artwork creation, and expressive storytelling.
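
The simplest way to see that multimodal emphasis in practice is to pass an image and a text instruction in a single request. The sketch below does that through the Gemini API, assuming the google-generativeai SDK and Pillow; the file name is a placeholder, and the Live API, image generation, and text-to-speech features listed above use separate interfaces not shown here.

    # Multimodal-understanding sketch: one request mixing an image and text.
    # (Assumes the google-generativeai SDK and Pillow; "diagram.png" is a placeholder.)
    import os
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")

    image = Image.open("diagram.png")  # any local image file

    # The model reasons over the image and the instruction together.
    response = model.generate_content(
        [image, "Explain what this architecture diagram shows."]
    )
    print(response.text)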

However, some Gemini 2.0 Flash features aren't yet available on Vertex AI or are still in preview there, as this graphic illustrates:

Gemini 2.0 Flash Features (source: Google).

Another graphic, from a separate Google post, rounds up feature availability across the Gemini models:

Model Features Comparison (source: Google).

As noted, along with the Vertex AI cloud service, the new tech is also available through the Gemini API to users of Google AI Studio, a browser-based development environment designed for building and experimenting with generative AI models.

The new model also shows up in the online Gemini app. Users who try it there are advised that it defaults to a concise style, which Google said makes the model easier to use and reduces cost, though it can also be prompted to use a more verbose style that produces better results in chat-oriented use cases. When asked, in that verbose style, about the most important thing to note about the update, the app replied: "For IT pros and developers, the single most important thing to note about the Gemini 2.0 update is its enhanced multimodal capabilities, enabling seamless integration and understanding of information across text, images, audio, and video."
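
Developers who want similar control over response length through the API, rather than the consumer app, can steer the model with a system instruction. This is a hypothetical illustration, not a recipe from Google's post:

    # Hypothetical sketch: steering concise vs. verbose output with a
    # system instruction (assumes the google-generativeai SDK).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    concise = genai.GenerativeModel(
        "gemini-2.0-flash",
        system_instruction="Answer in two sentences or fewer.",
    )
    verbose = genai.GenerativeModel(
        "gemini-2.0-flash",
        system_instruction="Answer conversationally, with examples and caveats.",
    )

    question = "What should IT pros note about the Gemini 2.0 update?"
    print(concise.generate_content(question).text)
    print(verbose.generate_content(question).text)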

Availability of the new LLM on Vertex AI, Google AI Studio and the online app was just part of the news in Google's post, which also announced:

Gemini 2.0 Flash-Lite: This is a new model in public preview, focusing on cost efficiency. Google said it offers:
  • Better quality than 1.5 Flash: Improves output quality while maintaining the same speed and cost.
  • Multimodal input: Can understand and process information from images and text.
  • 1 million token context window: Can handle large amounts of information in a single interaction (a token-counting sketch follows this list).
Gemini 2.0 Pro Experimental: This is an experimental version of the Pro model, geared towards complex tasks and coding. Google said it features:
  • Strongest coding performance: Better than any previous Gemini model.
  • Improved world knowledge and reasoning: Can handle complex prompts and understand nuances in language.
  • 2 million token long context window: Can analyze and understand vast amounts of information.
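
Context windows that large are easiest to reason about by counting tokens before sending a request. Here's a small sketch that does so against the 1 million token limit cited for 2.0 Flash and Flash-Lite, again assuming the google-generativeai SDK; the document path is a placeholder, and the Flash-Lite and Pro Experimental model IDs shown in AI Studio may differ from the GA Flash ID used here.

    # Token-budget sketch: check a long prompt against the 1M-token input
    # limit before sending it (assumes the google-generativeai SDK;
    # "big_report.txt" is a placeholder document).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")

    CONTEXT_WINDOW = 1_000_000  # input token limit cited for 2.0 Flash/Flash-Lite

    with open("big_report.txt") as f:
        document = f.read()

    prompt = "List the action items in this report:\n\n" + document
    used = model.count_tokens(prompt).total_tokens
    print(f"{used:,} of {CONTEXT_WINDOW:,} input tokens used")

    if used <= CONTEXT_WINDOW:
        print(model.generate_content(prompt).text)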

Here's the pricing for 2.0 Flash and Flash-Lite, which Google claimed can come in lower than Gemini 1.5 Flash pricing on mixed-context workloads, despite the performance improvements both models deliver.

Gemini API Pricing (source: Google).

About the Author

David Ramel is an editor and writer at Converge 360.
