Shafaq News / After coming to Bard and the Pixel 8 Pro last week, Gemini, Google’s recently announced flagship gen AI model family, is launching for Google Cloud customers using Vertex AI.
Gemini Pro, a lightweight version of the more capable Gemini Ultra model (currently in private preview for a “select set” of customers), is now available in public preview in Vertex AI, Google’s fully managed AI development platform, via the new Gemini Pro API. The API is free to use “within limits” for the time being (more on what that means later), supports 38 languages, is available in regions including Europe, and offers features like chat functionality and filtering.
“Gemini’s a state-of-the-art natively multimodal model that has sophisticated reasoning and advanced coding skills,” Google Cloud CEO Thomas Kurian said during a press briefing on Tuesday. “[Now,] developers will be able to build their own applications against it.”
By default, the Gemini Pro API in Vertex accepts text as input and generates text as output, similar to generative text model APIs like Anthropic’s, AI21’s and Cohere’s. An additional endpoint, Gemini Pro Vision, also launching today in preview, can process text and imagery — including photos and video — and output text along the lines of OpenAI’s GPT-4 with Vision model.
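To make the distinction between the two endpoints concrete, here is a minimal sketch of calling each one through the Vertex AI Python SDK’s preview namespace; the project ID, region, bucket path and prompts are illustrative placeholders, and the exact module path may shift while the API is in preview.

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Part

# Placeholder project and region for illustration.
vertexai.init(project="my-project", location="us-central1")

# Text in, text out via the Gemini Pro API.
text_model = GenerativeModel("gemini-pro")
response = text_model.generate_content("Write a short product description for a smart thermostat.")
print(response.text)

# Text plus imagery in, text out via the Gemini Pro Vision endpoint.
vision_model = GenerativeModel("gemini-pro-vision")
response = vision_model.generate_content([
    Part.from_uri("gs://my-bucket/photo.jpg", mime_type="image/jpeg"),
    "Describe what is happening in this image.",
])
print(response.text)
```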
Image processing addresses one of the major criticisms of Gemini following its unveiling last Wednesday — namely that the version of Gemini powering Bard, a fine-tuned Gemini Pro model, can’t accept images despite technically being “multimodal” (i.e. trained on a range of data including text, images, videos and audio). Questions linger around Gemini’s image analysis performance and skills, especially in light of a misleading product demo. But now, at least, users will be able to take the model and its image comprehension for a spin themselves.
Within Vertex AI, developers can customize Gemini Pro for specific contexts and use cases using the same fine-tuning tools available for other Vertex-hosted models, such as Google’s PaLM 2. Gemini Pro can also be connected to external APIs to perform particular actions, or “grounded” to improve the accuracy and relevance of its responses, either with third-party data from an app or database or with data from the web and Google Search.
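As a rough sketch of the “connect to external APIs” idea, the preview SDK supports a function-calling pattern: you declare a tool’s schema, and the model can respond with a structured call for your application to execute. The weather function, its parameters and the placeholder project below are assumptions for illustration, not a documented Google example.

```python
import vertexai
from vertexai.preview.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Tool,
)

vertexai.init(project="my-project", location="us-central1")  # placeholders

# A hypothetical external API, described to the model as a callable tool.
get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Look up the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"},
        },
        "required": ["city"],
    },
)

weather_tool = Tool.from_function_declarations([get_weather])
model = GenerativeModel("gemini-pro", tools=[weather_tool])

response = model.generate_content("What's the weather in Paris right now?")

# Rather than answering in prose, the model emits a structured call that the
# application routes to the real API; the result can then be fed back to the
# model to produce a grounded, user-facing answer.
print(response.candidates[0].content.parts[0].function_call)
```

Grounding on Google Search or on your own data works similarly in spirit: the model’s output is tied back to an external source rather than generated purely from its training data.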
(TechCrunch)