Export Connect Agent metrics to Cloud Monitoring
This page explains how to export Connect Agent metrics to
Cloud Monitoring from Google Distributed Cloud, GKE on AWS, or any other
registered Kubernetes cluster.
Overview
In a Google Distributed Cloud or GKE on AWS cluster, Prometheus collects metrics and stores
them locally within the cluster. Registering a cluster outside Google Cloud to a fleet
creates a Deployment called Connect Agent in the cluster. Prometheus collects
useful metrics from Connect Agent, like errors connecting to Google and the
number of open connections. To make these metrics available to
Cloud Monitoring, you must:
Expose the Connect Agent using a Service.
Deploy prometheus-to-sd,
a simple component that scrapes Prometheus metrics and exports them to
Cloud Monitoring.
Afterwards, you view the metrics by using Monitoring in the
Google Cloud console, or by port forwarding the Service and using curl.
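For orientation, the metrics that prometheus-to-sd scrapes are in the standard Prometheus text exposition format. The following sketch simulates filtering such output for Connect Agent metrics; the sample lines and values are illustrative, although gkeconnect_dialer_connection_errors_total is a real Connect Agent metric shown later on this page:

```shell
# Simulated Prometheus exposition output (values made up for illustration),
# filtered the way you might filter real output scraped from Connect Agent.
printf '%s\n' \
  '# HELP gkeconnect_dialer_connection_errors_total Errors connecting to Google.' \
  '# TYPE gkeconnect_dialer_connection_errors_total counter' \
  'gkeconnect_dialer_connection_errors_total 0' \
  'go_goroutines 12' |
grep '^gkeconnect'
```

This prints only the line `gkeconnect_dialer_connection_errors_total 0`, discarding the comment lines and unrelated metrics.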
Creating a variable for Connect Agent's namespace
Connect Agent typically runs in the namespace gke-connect. Its namespace
carries the label hub.gke.io/project, and its HTTP server listens on
port 8080.
Create a variable, AGENT_NS, for the namespace:
AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project=PROJECT_ID)
Replace the following:
KUBECONFIG: the kubeconfig file for your cluster
PROJECT_ID: the project ID
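If the label query matches nothing, AGENT_NS ends up empty and the later kubectl commands target the wrong namespace. A minimal guard, shown here with a hard-coded example value standing in for the kubectl query above:

```shell
# Example value only; in practice AGENT_NS comes from the kubectl query above.
AGENT_NS=gke-connect

# Fail early if the namespace lookup returned nothing.
if [ -z "${AGENT_NS}" ]; then
  echo "Connect Agent namespace not found; is the cluster registered?" >&2
  exit 1
fi
echo "Using namespace: ${AGENT_NS}"
```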
Exposing Connect Agent Deployment
Copy the following configuration to a YAML file named
gke-connect-agent.yaml. This configuration creates a Service,
gke-connect-agent, which exposes the Connect Agent Deployment.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: gke-connect-agent
  name: gke-connect-agent
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gke-connect-agent
  type: ClusterIP
Apply the YAML file to the Connect Agent's namespace in your cluster, where
KUBECONFIG is the path to the cluster's kubeconfig file:
kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f gke-connect-agent.yaml
Bind the roles/monitoring.metricWriter IAM role to the fleet Google service account:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"
Replace the following:
PROJECT_ID: your Google Cloud project ID
SERVICE_ACCOUNT_NAME: the service account used when registering the cluster
Deploying prometheus-to-sd
Copy the following configuration to a YAML file named
prometheus-to-sd.yaml, where:
PROJECT_ID is your Google Cloud project ID.
CLUSTER_NAME is the name of the Kubernetes cluster where
Connect Agent runs.
REGION is the location that is geographically close to where
your cluster runs. Choose a
Google Cloud zone that is
geographically close to where the cluster is physically located.
ZONE is the location near your on-prem datacenter.
Choose a Google Cloud zone that is geographically close to where traffic
flows.
This configuration creates two resources:
A ConfigMap, prom-to-sd-user-config, which declares several variables
for use by the Deployment
A Deployment, prometheus-to-monitoring, which runs prometheus-to-sd in
a single Pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-to-sd-user-config
data:
  # The project that the Connect Agent uses. Accepts ID or number.
  project: PROJECT_ID
  # A name for the cluster, which shows up in Cloud Monitoring.
  cluster_name: CLUSTER_NAME
  # cluster_location must be valid (e.g. us-west1-a); shows up in Cloud Monitoring.
  cluster_location: REGION
  # A zone name to report (e.g. us-central1-a).
  zone: ZONE
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-to-monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: prometheus-to-monitoring
  template:
    metadata:
      labels:
        run: prometheus-to-monitoring
    spec:
      containers:
      - args:
        - /monitor
        # 'gke-connect-agent' is the text that will show up in the Cloud Monitoring metric name.
        - --source=gke-connect-agent:http://gke-connect-agent:8080
        - --monitored-resource-types=k8s
        - --stackdriver-prefix=custom.googleapis.com
        - --project-id=$(PROM_PROJECT)
        - --cluster-name=$(PROM_CLUSTER_NAME)
        - --cluster-location=$(PROM_CLUSTER_LOCATION)
        - --zone-override=$(PROM_ZONE)
        # A node name to report. This is a dummy value.
        - --node-name=MyGkeConnectAgent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/creds/creds-gcp.json
        - name: PROM_PROJECT
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: project
        - name: PROM_CLUSTER_NAME
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_name
        - name: PROM_CLUSTER_LOCATION
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_location
        - name: PROM_ZONE
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: zone
        image: gcr.io/google-containers/prometheus-to-sd:v0.7.1
        imagePullPolicy: IfNotPresent
        name: prometheus-to-monitoring
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/creds
          name: creds-gcp
          readOnly: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: creds-gcp
        secret:
          defaultMode: 420
          # This secret is already set up for the Connect Agent.
          secretName: creds-gcp
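Note that the $(PROM_...) references in the container args are expanded by Kubernetes from the env entries (which in turn read the ConfigMap), not by a shell. The effect is roughly equivalent to this shell sketch, with illustrative values:

```shell
# Illustration only: Kubernetes substitutes $(VAR) in container args from the
# container's env vars, which here come from the prom-to-sd-user-config ConfigMap.
PROM_PROJECT=example-project        # would come from the 'project' key
PROM_CLUSTER_NAME=example-cluster   # would come from the 'cluster_name' key
echo "--project-id=${PROM_PROJECT} --cluster-name=${PROM_CLUSTER_NAME}"
```

Because the expansion happens inside Kubernetes, updating the ConfigMap changes the flags only after the Pod is restarted.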
Apply the YAML file to the Connect Agent's namespace in your cluster, where
KUBECONFIG is the path to the cluster's kubeconfig file:
kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml
Viewing metrics
Console
Go to the Monitoring page in the Google Cloud console.
From the left menu, click Metrics Explorer.
Connect Agent's metrics are prefixed with
custom.googleapis.com/gke-connect-agent/, where gke-connect-agent is
the string specified in the --source argument. For example,
custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total
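The full metric name is just the --stackdriver-prefix, the source name, and the Prometheus metric name joined with slashes; a quick sketch of how the example above is assembled:

```shell
# Build the Cloud Monitoring metric name from its three parts.
PREFIX=custom.googleapis.com                       # from --stackdriver-prefix
SOURCE=gke-connect-agent                           # from --source
METRIC=gkeconnect_dialer_connection_errors_total   # Prometheus metric name
echo "${PREFIX}/${SOURCE}/${METRIC}"
```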
cURL
In a shell, use kubectl to port forward the gke-connect-monitoring Service:
kubectl -n ${AGENT_NS} port-forward svc/gke-connect-monitoring 8080
Open another shell, then run:
curl localhost:8080/metrics
Cleaning up
To delete the resources you created in this topic:
AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project)
kubectl delete configmap prom-to-sd-user-config --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete service gke-connect-agent --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete deployment prometheus-to-monitoring --kubeconfig KUBECONFIG -n ${AGENT_NS}