| **Note:** Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Built-in tools for the Live API\n\nLive API-supported models come with the built-in ability to use the\nfollowing tools:\n\n- [Function calling](#function-calling)\n- [Code execution](#code-execution)\n- [Grounding with Google Search](#grounding-google-search)\n- [Grounding with Vertex AI RAG Engine (Preview)](#use-rag-with-live-api)\n\nTo enable a particular tool for usage in returned responses, include the name of\nthe tool in the `tools` list when you initialize the model. The following\nsections provide examples of how to use each of the built-in tools in your code.\n\nSupported models\n----------------\n\nYou can use the Live API with the following models:\n\n^\\*^ Reach out to your Google account team representative to request\naccess.\n\nFunction calling\n----------------\n\nUse [function calling](/vertex-ai/generative-ai/docs/multimodal/function-calling) to create a description of a\nfunction, then pass that description to the model in a request. The response\nfrom the model includes the name of a function that matches the description and\nthe arguments to call it with.\n\nAll functions must be declared at the start of the session by sending tool\ndefinitions as part of the `LiveConnectConfig` message.\n\nTo enable function calling, include `function_declarations` in the `tools` list: \n\n### Python\n\n```python\nimport asyncio\nfrom google import genai\nfrom google.genai import types\n\nclient = genai.Client(\n vertexai=True,\n project=GOOGLE_CLOUD_PROJECT,\n location=GOOGLE_CLOUD_LOCATION,\n)\nmodel = \"gemini-live-2.5-flash\"\n\n# Simple function definitions\nturn_on_the_lights = {\"name\": \"turn_on_the_lights\"}\nturn_off_the_lights = {\"name\": \"turn_off_the_lights\"}\n\ntools = [{\"function_declarations\": [turn_on_the_lights, turn_off_the_lights]}]\nconfig = {\"response_modalities\": [\"TEXT\"], \"tools\": tools}\n\nasync def main():\n async with client.aio.live.connect(model=model, config=config) as session:\n prompt = \"Turn on the lights please\"\n await session.send_client_content(turns={\"parts\": [{\"text\": prompt}]})\n\n async for chunk in session.receive():\n if chunk.server_content:\n if chunk.text is not None:\n print(chunk.text)\n elif chunk.tool_call:\n function_responses = []\n for fc in tool_call.function_calls:\n function_response = types.FunctionResponse(\n name=fc.name,\n response={ \"result\": \"ok\" } # simple, hard-coded function response\n )\n function_responses.append(function_response)\n\n await session.send_tool_response(function_responses=function_responses)\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n \n```\n\n### Python\n\nCode execution\n--------------\n\nYou can use [code execution](/vertex-ai/generative-ai/docs/multimodal/code-execution) with the Live API\nto generate and execute Python code directly. 
Code execution
--------------

You can use [code execution](/vertex-ai/generative-ai/docs/multimodal/code-execution) with the Live API
to generate and execute Python code directly. To enable code execution for your
responses, include `code_execution` in the `tools` list:

### Python

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
)
model = "gemini-live-2.5-flash"

tools = [{"code_execution": {}}]
config = {"response_modalities": ["TEXT"], "tools": tools}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "Compute the largest prime palindrome under 100000."
        await session.send_client_content(turns={"parts": [{"text": prompt}]})

        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.text is not None:
                    print(chunk.text)

                model_turn = chunk.server_content.model_turn
                if model_turn:
                    for part in model_turn.parts:
                        if part.executable_code is not None:
                            print(part.executable_code.code)

                        if part.code_execution_result is not None:
                            print(part.code_execution_result.output)

if __name__ == "__main__":
    asyncio.run(main())
```

Grounding with Google Search
----------------------------

You can use [Grounding with Google Search](/vertex-ai/generative-ai/docs/grounding/grounding-with-google-search) with
the Live API by including `google_search` in the `tools` list:

### Python

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
)
model = "gemini-live-2.5-flash"

tools = [{"google_search": {}}]
config = {"response_modalities": ["TEXT"], "tools": tools}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "When did the last Brazil vs. Argentina soccer match happen?"
        await session.send_client_content(turns={"parts": [{"text": prompt}]})

        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.text is not None:
                    print(chunk.text)

                # The model might generate and execute Python code to use Search
                model_turn = chunk.server_content.model_turn
                if model_turn:
                    for part in model_turn.parts:
                        if part.executable_code is not None:
                            print(part.executable_code.code)

                        if part.code_execution_result is not None:
                            print(part.code_execution_result.output)

if __name__ == "__main__":
    asyncio.run(main())
```

Grounding with Vertex AI RAG Engine (Preview)
---------------------------------------------

You can use Vertex AI RAG Engine with the Live API to ground responses and to
store and retrieve context:

### Python

```python
# This example is written for a notebook environment (top-level await).
from google import genai
from google.genai import types
from google.genai.types import Content, LiveConnectConfig, Modality, Part
from IPython import display

# Placeholders: replace with your own project, location, prompt, and corpus.
PROJECT_ID = YOUR_PROJECT_ID
LOCATION = YOUR_LOCATION
TEXT_INPUT = YOUR_TEXT_INPUT
RAG_CORPUS_RESOURCE = YOUR_RAG_CORPUS_RESOURCE_NAME
MODEL_NAME = "gemini-live-2.5-flash"

client = genai.Client(
    vertexai=True,
    project=PROJECT_ID,
    location=LOCATION,
)

rag_store = types.VertexRagStore(
    rag_resources=[
        types.VertexRagStoreRagResource(
            rag_corpus=RAG_CORPUS_RESOURCE  # Use a memory corpus if you want to store context.
        )
    ],
    # Set `store_context` to True to allow the Live API to store context in your memory corpus.
    store_context=True,
)

async with client.aio.live.connect(
    model=MODEL_NAME,
    config=LiveConnectConfig(
        response_modalities=[Modality.TEXT],
        tools=[types.Tool(retrieval=types.Retrieval(vertex_rag_store=rag_store))],
    ),
) as session:
    text_input = TEXT_INPUT
    print("> ", text_input, "\n")
    await session.send_client_content(
        turns=Content(role="user", parts=[Part(text=text_input)])
    )

    async for message in session.receive():
        if message.text:
            display.display(display.Markdown(message.text))
            continue
```

For more information, see [Use Vertex AI RAG Engine in Gemini
Live API](/vertex-ai/generative-ai/docs/rag-engine/use-rag-in-multimodal-live).
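The preceding sections enable a single tool per session, but the `tools` list can contain multiple entries, so one Live API session can combine, for example, Google Search grounding, code execution, and your own function declarations. The configuration below is a minimal sketch of that pattern; support for specific tool combinations can vary by model, so verify it against the documentation for the model you use:

```python
# Sketch: enabling several built-in tools in a single session configuration.
# `turn_on_the_lights` reuses the illustrative declaration from the function calling example.
turn_on_the_lights = {"name": "turn_on_the_lights"}

tools = [
    {"google_search": {}},
    {"code_execution": {}},
    {"function_declarations": [turn_on_the_lights]},
]
config = {"response_modalities": ["TEXT"], "tools": tools}
```

You would then pass this `config` to `client.aio.live.connect()` exactly as in the earlier examples.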
(Public preview) Native audio
-----------------------------

|
| **Preview**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

[Gemini 2.5 Flash with Live API](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash#live-api-native-audio)
introduces native audio capabilities, enhancing the standard Live API
features. Native audio provides richer and more natural voice interactions
through [30 HD voices](/text-to-speech/docs/chirp3-hd#voice_options) in [24
languages](/text-to-speech/docs/chirp3-hd#language_availability). It also
includes two new features exclusive to native audio: [Proactive Audio](#use-proactive-audio) and [Affective Dialog](#use-affective-dialog).

| **Note:** `response_modalities=["TEXT"]` is not supported for native audio.

### Use Proactive Audio

**Proactive Audio** allows the model to respond only when
relevant. When enabled, the model generates text transcripts and audio responses
proactively, but *only* for queries directed to the device. Non-device-directed
queries are ignored.

To use Proactive Audio, configure the `proactivity` field in
the setup message and set `proactive_audio` to `true`:

### Python

```python
from google.genai.types import LiveConnectConfig, ProactivityConfig

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    proactivity=ProactivityConfig(proactive_audio=True),
)
```

### Use Affective Dialog

| **Important:** Affective Dialog can produce unexpected results.

**Affective Dialog** allows models using Live API
native audio to better understand and respond appropriately to users' emotional
expressions, leading to more nuanced conversations.

To enable Affective Dialog, set `enable_affective_dialog` to
`true` in the setup message:

### Python

```python
from google.genai.types import LiveConnectConfig

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    enable_affective_dialog=True,
)
```

More information
----------------

For more information on using the Live API, see:

- [Live API overview](/vertex-ai/generative-ai/docs/live-api)
- [Live API reference guide](/vertex-ai/generative-ai/docs/model-reference/multimodal-live)
- [Interactive conversations](/vertex-ai/generative-ai/docs/live-api/streamed-conversations)