# Cloudflare Workers Prometheus Exporter

1. Install

```sh
$ npm i workers-prometheus
# or
$ pnpm add workers-prometheus
```

2. Set up a Prometheus registry

```ts
import { Registry } from 'workers-prometheus/client';
import { getPrometheusExporter } from 'workers-prometheus/server';
import type { PrometheusServer } from 'workers-prometheus/server';

export const PROMETHEUS = getPrometheusExporter();
```

Then add a Durable Object binding to your `wrangler.toml` (Durable Objects require a Workers paid plan):

```toml
[[durable_objects.bindings]]
name = "PROMETHEUS"
class_name = "PROMETHEUS"

[[migrations]]
tag = "v1"
new_classes = ["PROMETHEUS"]
```
3. Write your worker!
```ts
export default {
	async fetch(request, env, ctx): Promise<Response> {
		const url = new URL(request.url);

		const REGISTRY = new Registry(env.PROMETHEUS, ctx);

		switch (url.pathname) {
			case '/metrics':
				return new Response(await REGISTRY.metrics());
			case '/flush':
				return new Response(await REGISTRY.clear());

			default: {
				const counter = REGISTRY.counter('http_requests', 'Number of HTTP requests received');
				counter.inc({ method: request.method });

				// Prometheus metric names may only contain [a-zA-Z0-9_:], so use underscores.
				const gauge = REGISTRY.gauge('my_gauge', 'an increasing and decreasing gauge');
				gauge.inc();

				const histogram = REGISTRY.histogram('examplecom_latency', 'Counts latency for getting data from example.com', [50, 100, 250, 500, 1000]);
				const time = Date.now();
				const resp = await fetch('https://example.com');
				const latency = Date.now() - time;
				histogram.observe(latency, { status: resp.status });

				return new Response('ok');
			}
		}
	},
} satisfies ExportedHandler<{ PROMETHEUS: DurableObjectNamespace<PrometheusServer> }>;
```
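Once deployed, scraping `/metrics` returns the standard Prometheus text exposition format. The exact output depends on the library, but for the metrics above it would look roughly like this (values are illustrative):

```sh
$ curl https://<worker_name>.<account>.workers.dev/metrics
# HELP http_requests Number of HTTP requests received
# TYPE http_requests counter
http_requests{method="GET"} 42
# HELP my_gauge an increasing and decreasing gauge
# TYPE my_gauge gauge
my_gauge 1
# HELP examplecom_latency Counts latency for getting data from example.com
# TYPE examplecom_latency histogram
examplecom_latency_bucket{status="200",le="50"} 0
examplecom_latency_bucket{status="200",le="100"} 3
examplecom_latency_bucket{status="200",le="+Inf"} 5
examplecom_latency_sum{status="200"} 612
examplecom_latency_count{status="200"} 5
```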
4. Deploy your worker

```sh
$ wrangler deploy
```
5. Set up a Prometheus scraper

```yaml
scrape_configs:
  - job_name: prometheus
    scheme: https
    static_configs:
      - targets: ['<worker_name>.<account>.workers.dev']
```

Note that Prometheus targets are host(:port) only: the path comes from `metrics_path`, which already defaults to `/metrics`, and `scheme: https` is needed because `workers.dev` is served over HTTPS.

For a full example, see the `example` directory.

Planned features:

- Automatically flushing data periodically (important when using histograms; see the interim sketch below)
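Until automatic flushing lands, one possible workaround is a Cron Trigger whose scheduled handler flushes the registry, mirroring the `/flush` route above. This is a sketch under assumptions: it reuses the `Registry.clear()` API shown earlier, and a naive periodic clear will drop any samples Prometheus has not yet scraped, so the cron interval should be longer than the scrape interval.

```ts
import { Registry } from 'workers-prometheus/client';
import { getPrometheusExporter } from 'workers-prometheus/server';
import type { PrometheusServer } from 'workers-prometheus/server';

export const PROMETHEUS = getPrometheusExporter();

type Env = { PROMETHEUS: DurableObjectNamespace<PrometheusServer> };

export default {
	// ... fetch handler as in step 3 ...
	async scheduled(controller, env, ctx) {
		// Flush accumulated metrics on each cron tick (equivalent to hitting /flush).
		const REGISTRY = new Registry(env.PROMETHEUS, ctx);
		await REGISTRY.clear();
	},
} satisfies ExportedHandler<Env>;
```

with a matching trigger in `wrangler.toml`:

```toml
[triggers]
crons = ["*/15 * * * *"] # every 15 minutes
```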