If you are building AI systems with the Google Agent Development Kit (ADK) and want to evaluate multi-agent conversations, handoffs, and tool usage, you can use the Openlayer SDKs to make Openlayer part of your workflow. This integration guide shows how to trace and monitor multi-agent systems powered by Gemini models.

Evaluating Google ADK Applications

You can set up Openlayer tests to evaluate your Google ADK applications in both monitoring and development modes.

Monitoring

To use the monitoring mode, you must instrument your code to publish the requests your AI system receives to the Openlayer platform. To set it up, follow the steps in the code snippet below:
Python
# 1. Set the environment variables
import os

os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE"

# 2. Import and enable Google ADK tracing BEFORE creating agents
from openlayer.lib import trace_google_adk

trace_google_adk()

# 3. Create your agents with tools and sub-agents
from google.adk.agents import Agent
from google.adk.runners import Runner

def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Sunny and 72°F in {location}"

weather_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash-exp",
    description="Provides weather information",
    instructions="You help users get weather information for any location.",
    tools=[get_weather]
)

main_agent = Agent(
    name="main_agent",
    model="gemini-2.0-flash-exp",
    instructions="You are a helpful assistant that can check the weather.",
    sub_agents=[weather_agent]
)

# 4. Run conversations with automatic tracing
from google.adk.sessions import InMemorySessionService
from google.genai import types

APP_NAME = "weather_app"
USER_ID = "user-456"
SESSION_ID = "session-123"

session_service = InMemorySessionService()
runner = Runner(agent=main_agent, app_name=APP_NAME, session_service=session_service)

async def run_conversation(user_input: str) -> str:
    # Sessions must be created before the runner can use them
    await session_service.create_session(
        app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID
    )

    final_response = ""
    async for event in runner.run_async(
        user_id=USER_ID,
        session_id=SESSION_ID,
        new_message=types.Content(role="user", parts=[types.Part(text=user_input)]),
    ):
        if event.is_final_response() and event.content and event.content.parts:
            final_response = event.content.parts[0].text
            print(final_response)
    return final_response

# From now on, all agent conversations, handoffs, and tool calls
# are automatically traced by Openlayer
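# (top-level await like below works in a notebook or other async context;
# in a plain Python script, use asyncio.run(run_conversation(...)) instead)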
await run_conversation("What's the weather in San Francisco?")

See full Python example

Once the code is instrumented, all your Google ADK interactions are automatically published to Openlayer, including:
  • Agent execution with agent names, descriptions, and instructions
  • LLM calls to Gemini models with messages and configurations
  • Token usage including prompt tokens, completion tokens, and totals
  • Tool calls with function names, arguments, and results
  • Agent transfers and handoffs between sub-agents with proper hierarchy
  • Session context including user IDs, session IDs, and invocation tracking
  • Metadata such as latency and timestamps for all operations
If you navigate to the “Data” page of your Openlayer data source, you can see the complete traces for each multi-agent conversation.
The Google ADK integration automatically captures the full agent workflow, including sub-agent handoffs and tool usage. You can use this together with tracing to monitor complex multi-agent systems as part of larger AI workflows. Make sure to call trace_google_adk() before creating any agents.
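For example, when the ADK agent is one step in a larger pipeline, you can wrap the surrounding steps with the @trace() decorator from openlayer.lib (covered in the Tracing guide) so that the agent's spans appear nested inside the parent trace. The snippet below is a minimal sketch: build_prompt and answer_question are hypothetical helpers used only for illustration, and run_conversation is the function defined above.
Python
# Minimal sketch: nesting the auto-traced ADK agent inside a larger traced workflow.
# build_prompt and answer_question are hypothetical helpers for illustration.
import asyncio

from openlayer.lib import trace

@trace()
def build_prompt(user_input: str) -> str:
    # Placeholder pre-processing step (e.g., retrieval or prompt templating)
    return f"Answer concisely: {user_input}"

@trace()
def answer_question(user_input: str) -> str:
    prompt = build_prompt(user_input)
    # The ADK spans produced by run_conversation are captured automatically by
    # trace_google_adk() and show up as nested steps within this trace.
    return asyncio.run(run_conversation(prompt))

answer_question("What's the weather in San Francisco?")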
After your AI system's requests are continuously published and logged by Openlayer, you can create tests that run at a regular cadence on top of them. Refer to the Monitoring overview for details on Openlayer's monitoring mode, the Publishing data guide for more information on setting it up, or the Tracing guide to understand how to trace more complex systems.

Development

In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are evaluated automatically whenever they are triggered, for example by a new commit. Openlayer tests often rely on your AI system’s outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:
  1. either provide a way for Openlayer to run your AI system on your datasets, or
  2. before pushing, generate the model outputs yourself and push them alongside your artifacts (see the sketch at the end of this section).
For AI systems built with Google ADK, if you are not computing your system’s outputs yourself, you must provide your API credentials for Google’s Gemini models. To do so, navigate to “Workspace settings” -> “Environment variables,” and click on “Add secret” to add the required Google API credentials (such as GOOGLE_API_KEY or appropriate service account credentials). If you don’t add the required API credentials, you’ll encounter a “Missing API key” error when Openlayer tries to run your AI system to get its outputs.
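If you opt to compute the outputs yourself, the sketch below shows one way to do it, reusing run_conversation from the monitoring example. The file name and the "question" and "output" column names are illustrative; adapt them to your validation dataset.
Python
# Minimal sketch: generating outputs over a validation set before pushing.
# File and column names ("question", "output") are illustrative only.
import asyncio

import pandas as pd

async def generate_outputs(df: pd.DataFrame) -> pd.DataFrame:
    outputs = []
    for question in df["question"]:
        # run_conversation returns the agent's final response text
        outputs.append(await run_conversation(question))
    df["output"] = outputs
    return df

validation_set = pd.read_csv("validation_set.csv")
validation_set = asyncio.run(generate_outputs(validation_set))
validation_set.to_csv("validation_set_with_outputs.csv", index=False)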