diff --git a/codelabs/jaeger.md b/codelabs/jaeger.md
new file mode 100644
index 00000000..3312327d
--- /dev/null
+++ b/codelabs/jaeger.md
@@ -0,0 +1,54 @@
+author: Emmanuel Odeke and Henry Ventura
+summary: Setup and configure Jaeger
+environments: Web
+id: jaeger
+
+# Setup and Configure Jaeger
+
+## Overview of the tutorial
+Duration: 0:05
+
+This tutorial shows you how to set up and configure Jaeger.
+
+
+
+Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including:
+
+* Distributed context propagation
+* Distributed transaction monitoring
+* Root cause analysis
+* Service dependency analysis
+* Performance / latency optimization
+
+Requirements:
+
+* Docker. If you don't already have it, see [How to install Docker](https://docs.docker.com/install/).
+
+## Downloading the Jaeger Docker image
+Duration: 0:01
+
+We'll pull the Jaeger Docker image from https://hub.docker.com/u/jaegertracing/ by running:
+
+```shell
+docker pull jaegertracing/all-in-one:latest
+```
+
+## Starting Jaeger
+Duration: 0:01
+
+```shell
+docker run -d --name jaeger \
+ -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
+ -p 5775:5775/udp \
+ -p 6831:6831/udp \
+ -p 6832:6832/udp \
+ -p 5778:5778 \
+ -p 16686:16686 \
+ -p 14268:14268 \
+ -p 9411:9411 \
+ jaegertracing/all-in-one:latest
+```
+
+You can now open the Jaeger web UI in your browser by visiting [http://localhost:16686](http://localhost:16686/).
diff --git a/codelabs/jaeger/codelab.json b/codelabs/jaeger/codelab.json
new file mode 100644
index 00000000..3d7780cc
--- /dev/null
+++ b/codelabs/jaeger/codelab.json
@@ -0,0 +1,20 @@
+{
+ "environment": "web",
+ "source": "jaeger.md",
+ "format": "html",
+ "prefix": "../../",
+ "mainga": "UA-49880327-14",
+ "updated": "2018-07-21T18:00:01-07:00",
+ "id": "jaeger",
+ "duration": 2,
+ "title": "Setup and Configure Jaeger",
+ "author": "Emmanuel Odeke and Henry Ventura",
+ "summary": "Setup and configure Jaeger",
+ "theme": "",
+ "status": null,
+ "category": null,
+ "tags": [
+ "web"
+ ],
+ "url": "jaeger"
+}
diff --git a/codelabs/jaeger/img/a74b22a1a2749ae7.png b/codelabs/jaeger/img/a74b22a1a2749ae7.png
new file mode 100644
index 00000000..1e4753d0
Binary files /dev/null and b/codelabs/jaeger/img/a74b22a1a2749ae7.png differ
diff --git a/codelabs/jaeger/index.html b/codelabs/jaeger/index.html
new file mode 100644
index 00000000..78af84b3
--- /dev/null
+++ b/codelabs/jaeger/index.html
@@ -0,0 +1,97 @@
diff --git a/codelabs/prometheus.md b/codelabs/prometheus.md
new file mode 100644
index 00000000..c6a28e96
--- /dev/null
+++ b/codelabs/prometheus.md
@@ -0,0 +1,56 @@
+author: Emmanuel Odeke and Henry Ventura
+summary: Setup and configure Prometheus
+environments: Web
+id: prometheus
+
+# Setup and Configure Prometheus
+
+## Overview of the tutorial
+Duration: 0:01
+
+This tutorial shows you how to set up and configure Prometheus.
+
+
+
+Prometheus is a monitoring system that collects metrics by scraping exposed endpoints at
+a regular interval. It evaluates rule expressions, displays results, and can trigger alerts
+when alert conditions are met.
+
+Requirements:
+
+* An installation of Prometheus; see [Install Prometheus](https://prometheus.io/docs/introduction/first_steps/)
+
+## Configure Prometheus
+Duration: 0:02
+
+Prometheus requires a configuration file, usually a ".yaml" file. For example, here is
+a sample "prometheus.yaml" file that scrapes our servers running at `localhost:9888`, `localhost:9988` and `localhost:9989`:
+
+```yaml
+global:
+ scrape_interval: 10s
+
+ external_labels:
+ monitor: 'media_search'
+
+scrape_configs:
+ - job_name: 'media_search'
+
+ scrape_interval: 10s
+
+ static_configs:
+ - targets: ['localhost:9888', 'localhost:9988', 'localhost:9989']
+```
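To make the scrape targets above concrete, here is a minimal, self-contained Python sketch of what Prometheus sees when it scrapes an endpoint: a plain-text page of metrics in the text exposition format. The `render_metrics` helper and the `media_search_requests_total` metric name are illustrative, not part of this codelab.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(counters):
    """Render a dict of counter name -> value in the Prometheus
    text exposition format returned by a scrape of an endpoint."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    counters = {"media_search_requests_total": 42}

    def do_GET(self):
        body = render_metrics(self.counters).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# To serve this on one of the scrape targets from the config above:
#   HTTPServer(("localhost", 9888), MetricsHandler).serve_forever()
```

With this running on `localhost:9888`, the `media_search` job above would pick up the counter every 10 seconds.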
+
+## Starting Prometheus
+Duration: 0:05
+
+Having downloaded Prometheus and set up your prometheus.yaml file, you should now be able to run:
+```shell
+prometheus --config.file=prometheus.yaml
+```
+
+## Viewing Prometheus output
+Duration: 0:01
+
+You should now be able to view the Prometheus UI by navigating to [http://localhost:9090/](http://localhost:9090/).
diff --git a/codelabs/prometheus/codelab.json b/codelabs/prometheus/codelab.json
new file mode 100644
index 00000000..4d21c388
--- /dev/null
+++ b/codelabs/prometheus/codelab.json
@@ -0,0 +1,20 @@
+{
+ "environment": "web",
+ "source": "prometheus.md",
+ "format": "html",
+ "prefix": "../../",
+ "mainga": "UA-49880327-14",
+ "updated": "2018-07-21T15:57:19-07:00",
+ "id": "prometheus",
+ "duration": 2,
+ "title": "Setup and Configure Prometheus",
+ "author": "Emmanuel Odeke and Henry Ventura",
+ "summary": "Setup and configure Prometheus",
+ "theme": "",
+ "status": null,
+ "category": null,
+ "tags": [
+ "web"
+ ],
+ "url": "prometheus"
+}
diff --git a/codelabs/prometheus/img/452ecf341cd4cb4d.png b/codelabs/prometheus/img/452ecf341cd4cb4d.png
new file mode 100644
index 00000000..8027e538
Binary files /dev/null and b/codelabs/prometheus/img/452ecf341cd4cb4d.png differ
diff --git a/codelabs/prometheus/index.html b/codelabs/prometheus/index.html
new file mode 100644
index 00000000..642aa5ef
--- /dev/null
+++ b/codelabs/prometheus/index.html
@@ -0,0 +1,104 @@
diff --git a/codelabs/stackdriver.md b/codelabs/stackdriver.md
new file mode 100644
index 00000000..45f16249
--- /dev/null
+++ b/codelabs/stackdriver.md
@@ -0,0 +1,38 @@
+author: Emmanuel Odeke and Henry Ventura
+summary: Setup and configure Google Stackdriver
+environments: Web
+id: stackdriver
+
+# Setup and Configure Google Stackdriver
+
+## Overview of the tutorial
+Duration: 0:01
+
+This tutorial shows you how to set up and configure Google Stackdriver Tracing and Metrics.
+
+Requirements:
+
+* A cloud-provider-based project that supports Stackdriver Monitoring and Tracing; we'll use Google Cloud Platform for this example
+
+## Create a Project on Google Cloud
+Duration: 0:02
+
+If you haven't already created a project on Google Cloud, [you can do so here](https://console.cloud.google.com/projectcreate).
+
+## Enable the Stackdriver APIs
+Duration: 0:05
+
+You will be enabling these two APIs:
+
+* Stackdriver Monitoring API
+* Stackdriver Trace API
+
+[Enable APIs](https://console.cloud.google.com/apis/library?q=stackdriver)
+
+
+
+
+## Enable Application Default Credentials
+Duration: 0:02
+
+Please make sure to enable Application Default Credentials for authentication. [Click here](https://developers.google.com/identity/protocols/application-default-credentials) to do so.
diff --git a/codelabs/zipkin.md b/codelabs/zipkin.md
new file mode 100644
index 00000000..8259cc57
--- /dev/null
+++ b/codelabs/zipkin.md
@@ -0,0 +1,41 @@
+author: Emmanuel Odeke and Henry Ventura
+summary: Setup and configure Zipkin
+environments: Web
+id: zipkin
+
+# Setup and Configure Zipkin
+
+## Overview of the tutorial
+Duration: 0:05
+
+This tutorial shows you how to set up and configure Zipkin.
+
+
+
+Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.
+
+It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper.
+
+Requirements:
+
+* Docker. If you don't already have it, see [How to install Docker](https://docs.docker.com/install/).
+
+## Downloading the Zipkin Docker image
+Duration: 0:01
+
+We'll pull the Zipkin Docker image from https://hub.docker.com/u/openzipkin/ by running:
+
+```shell
+docker pull openzipkin/zipkin
+```
+
+## Starting Zipkin
+Duration: 0:01
+
+```shell
+docker run -d -p 9411:9411 openzipkin/zipkin
+```
+
+You can now open the Zipkin web UI in your browser by visiting [http://localhost:9411](http://localhost:9411/).
diff --git a/codelabs/zipkin/codelab.json b/codelabs/zipkin/codelab.json
new file mode 100644
index 00000000..a2928a90
--- /dev/null
+++ b/codelabs/zipkin/codelab.json
@@ -0,0 +1,20 @@
+{
+ "environment": "web",
+ "source": "zipkin.md",
+ "format": "html",
+ "prefix": "../../",
+ "mainga": "UA-49880327-14",
+ "updated": "2018-07-21T17:10:56-07:00",
+ "id": "zipkin",
+ "duration": 2,
+ "title": "Setup and Configure Zipkin",
+ "author": "Emmanuel Odeke and Henry Ventura",
+ "summary": "Setup and configure Zipkin",
+ "theme": "",
+ "status": null,
+ "category": null,
+ "tags": [
+ "web"
+ ],
+ "url": "zipkin"
+}
diff --git a/codelabs/zipkin/img/2dc5a8ae4249fb14.png b/codelabs/zipkin/img/2dc5a8ae4249fb14.png
new file mode 100644
index 00000000..9bbd23f8
Binary files /dev/null and b/codelabs/zipkin/img/2dc5a8ae4249fb14.png differ
diff --git a/codelabs/zipkin/index.html b/codelabs/zipkin/index.html
new file mode 100644
index 00000000..35903a34
--- /dev/null
+++ b/codelabs/zipkin/index.html
@@ -0,0 +1,88 @@
diff --git a/content/_header.md b/content/_header.md
new file mode 100644
index 00000000..8c45aa58
--- /dev/null
+++ b/content/_header.md
@@ -0,0 +1 @@
+[OpenCensus](/)
diff --git a/content/_index.md b/content/_index.md
new file mode 100644
index 00000000..a817a70c
--- /dev/null
+++ b/content/_index.md
@@ -0,0 +1,42 @@
+---
+title: ""
+date: 2018-07-19T09:58:45-07:00
+draft: false
+class: "no-pagination no-top-border-header no-search max-text-width"
+---
+
+{{}}
+
+##### What is OpenCensus?
+
+{{}} is a vendor-agnostic single distribution of libraries to provide **metrics** collection and **tracing** for your microservices and monoliths alike.
+
+{{}}
+
+{{}}
+
+##### Can I use OpenCensus in my project?
+Our libraries support Go, Java, C++, Ruby, Erlang, Python, and PHP.
+
+Supported backends include Datadog, Instana, Jaeger, SignalFX, Stackdriver, and Zipkin. You can also [add support for other backends](/).
+
+{{}}
+
+{{}}
+
+##### Who is behind it?
+OpenCensus originates from Google, where a set of libraries called Census were used to automatically capture traces and metrics from services. Since going open source, the project is now composed of a group of cloud providers, application performance management vendors, and open source contributors. The project is hosted on GitHub and all work occurs there.
+
+{{}}
+
+{{}}
+
+##### What are *Metrics* and *Tracing*?
+
+[**Metrics**](/core-concepts/metrics) are any quantifiable piece of data that you would like to track, such as latency in a service or database, request content length, or number of open file descriptors. Viewing graphs of your metrics can help you understand and gauge the performance and overall quality of your application and set of services.
+
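As a concrete illustration of the metrics idea, here is a toy Python sketch of how a latency metric might be aggregated as a count, a sum, and a bucketed distribution. This is a hedged sketch of the concept, not the OpenCensus API; the class name and bucket boundaries are invented for illustration.

```python
class LatencyMetric:
    """Toy aggregation of a latency metric: count, sum, and a
    distribution over fixed bucket boundaries (in milliseconds)."""

    def __init__(self, bounds=(25, 50, 100, 200, 400)):
        self.bounds = bounds
        self.count = 0
        self.total = 0.0
        self.buckets = [0] * (len(bounds) + 1)  # last bucket = overflow

    def record(self, latency_ms):
        self.count += 1
        self.total += latency_ms
        for i, bound in enumerate(self.bounds):
            if latency_ms < bound:
                self.buckets[i] += 1
                return
        self.buckets[-1] += 1

    def mean(self):
        return self.total / self.count if self.count else 0.0

m = LatencyMetric()
for latency in [12.0, 30.0, 75.0, 250.0]:
    m.record(latency)
# m.count == 4; m.buckets == [1, 1, 1, 0, 1, 0]; m.mean() == 91.75
```

A real metrics library records measurements like this at each call site and periodically exports the aggregated views to a backend for graphing.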
+[**Traces**](/core-concepts/tracing) show you how a request propagates throughout your application or set of services. Viewing graphs of your traces can help you understand the bottlenecks in your architecture by visualizing how data flows between all of your services.
+
+##### Partners & Contributors
+
+{{}}
diff --git a/content/about.md b/content/about.md
deleted file mode 100644
index 60bdef6c..00000000
--- a/content/about.md
+++ /dev/null
@@ -1,12 +0,0 @@
-+++
-title = "About"
-Description = "about"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-date = "2018-05-08T13:19:03+07:00"
-+++
-
-OpenCensus is being developed by a group of cloud providers, Application Performance Management vendors, and open source contributors. This project is hosted on [GitHub](https://github.com/census-instrumentation) and all work occurs there.
-
-OpenCensus was initiated by Google, and is based on instrumentation systems used inside of Google.
diff --git a/content/advanced-concepts/_index.md b/content/advanced-concepts/_index.md
new file mode 100644
index 00000000..4a534283
--- /dev/null
+++ b/content/advanced-concepts/_index.md
@@ -0,0 +1,12 @@
+---
+title: "Advanced Concepts"
+date: 2018-07-16T14:44:49-07:00
+draft: false
+weight: 80
+---
+
+OpenCensus provides observability throughout your microservices and monoliths alike.
+
+In this section, we'll cover some advanced topics such as:
+
+{{% children %}}
diff --git a/content/advanced-concepts/context-propagation.md b/content/advanced-concepts/context-propagation.md
new file mode 100644
index 00000000..62fdd8e0
--- /dev/null
+++ b/content/advanced-concepts/context-propagation.md
@@ -0,0 +1,18 @@
++++
+title = "Context Propagation"
+type = "leftnav"
++++
+
+This page contains information about
+the context propagation formats used by OpenCensus.
+
+---
+
+### Table of contents
+- [B3 Propagation](#b3-propagation)
+- [CloudTrace](#cloudtrace)
+
+#### B3 Propagation
+"B3 propagation", also known as "Zipkin propagation", is the B3 header format used by Zipkin-compatible tracers.
+
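To make B3 propagation concrete, the sketch below injects and extracts a span context using the standard B3 multi-header names (`X-B3-TraceId`, `X-B3-SpanId`, `X-B3-ParentSpanId`, `X-B3-Sampled`). The helper functions themselves are illustrative, not the OpenCensus propagation API.

```python
import re

def inject_b3(headers, trace_id, span_id, parent_span_id=None, sampled=True):
    """Write a span context into an outgoing request's headers dict."""
    headers["X-B3-TraceId"] = trace_id
    headers["X-B3-SpanId"] = span_id
    if parent_span_id:
        headers["X-B3-ParentSpanId"] = parent_span_id
    headers["X-B3-Sampled"] = "1" if sampled else "0"

def extract_b3(headers):
    """Read a span context from incoming headers; returns None if absent or invalid."""
    trace_id = headers.get("X-B3-TraceId", "")
    span_id = headers.get("X-B3-SpanId", "")
    # Trace ids are 16 or 32 lowercase hex chars; span ids are 16.
    if not re.fullmatch(r"[0-9a-f]{16}([0-9a-f]{16})?", trace_id):
        return None
    if not re.fullmatch(r"[0-9a-f]{16}", span_id):
        return None
    return {
        "trace_id": trace_id,
        "span_id": span_id,
        "sampled": headers.get("X-B3-Sampled") == "1",
    }

headers = {}
inject_b3(headers, "463ac35c9f6413ad48485a3953bb6124", "a2fb4a1d1a96d312")
assert extract_b3(headers)["sampled"] is True
```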
+#### CloudTrace
diff --git a/content/troubleshooting.md b/content/advanced-concepts/troubleshooting.md
similarity index 81%
rename from content/troubleshooting.md
rename to content/advanced-concepts/troubleshooting.md
index 23f1a0a3..c9c2b862 100644
--- a/content/troubleshooting.md
+++ b/content/advanced-concepts/troubleshooting.md
@@ -20,13 +20,13 @@ the handler is violating the error rate SLO.
In order to debug, they query the tracing backend
to search for a trace where
-`inbox.Timeline.Messages` span is errored.
+inbox.Timeline.Messages span is errored.

-From the traces, it is visible that `mysql.Query` is
-consistently erroring for `inbox.Timeline.Messages`.
-The error is `INVALID_ARGUMENT`.
+From the traces, it is visible that mysql.Query is
+consistently erroring for inbox.Timeline.Messages.
+The error is INVALID_ARGUMENT.
Carefully investigating the code, they see that
there is an escaping problem with the MySQL query.
@@ -55,8 +55,8 @@ taking more than 100ms.
From the traces, it is visible that auth.AccessToken
is often retried when serving the /timeline endpoint.
-By looking at the logs with the NetOps team,
-they realize that there is a networking outage affecting
+By looking at the logs with the NetOps team, they
+realize that there is a networking outage affecting
the requests between the HTTP server and the auth service.
Operations fix the networking issue and retries
@@ -70,20 +70,20 @@ Latency returns back to normal.
Operations get an alert for increasing latency for
the `inbox.Timeline.Messages` RPC handler.
Team looks at the dashboards and confirms that
-`inbox.Timeline.Messages` handler is violating the latency SLO.
+inbox.Timeline.Messages handler is violating the latency SLO.

-They search for a trace where `inbox.Timeline.Messages`
+They search for a trace where inbox.Timeline.Messages
span is taking more than 100ms and violating the SLOs.

-They realize for all cases `inbox.Timeline.Messages`
+They realize for all cases inbox.Timeline.Messages
are querying the database and never see any cache hits.
By looking at the handler source code, they see
the latest development push mistakenly removed the
-`cache.Put` call after database is queries.
+cache.Put call after the database is queried.
They roll back the new release and fix the bug.
diff --git a/content/blog.md b/content/blog.md
deleted file mode 100644
index c423b8cb..00000000
--- a/content/blog.md
+++ /dev/null
@@ -1,187 +0,0 @@
-+++
-title = "Blog"
-+++
-
-
-##### July 11, 2018
-
-#### [Monitoring HTTP Latency with OpenCensus and Stackdriver](https://medium.com/google-cloud/monitoring-http-latency-with-opencensus-and-stackdriver-bf561608e81a)
-
-This post will describe how to code your own monitoring probes, similar in function to Stackdriver uptime checks on Google Cloud. Code, configuration files, commands, and detailed instructions… [\[read more\]](https://medium.com/google-cloud/monitoring-http-latency-with-opencensus-and-stackdriver-bf561608e81a)
-
----
-
-##### June 13, 2018
-
-#### [Redis clients instrumented by OpenCensus in Java and Go](https://medium.com/@orijtech/redis-clients-instrumented-by-opencensus-in-java-and-go-402470d92c5c)
-
-In this post we’ll examine Redis clients instrumented with OpenCensus in Java and Go, and apply them directly to use Redis in a media search service to alleviate costs, throttling, load… [\[read more\]](https://medium.com/@orijtech/redis-clients-instrumented-by-opencensus-in-java-and-go-402470d92c5c)
-
----
-
-##### June 13, 2018
-
-#### [Microsoft joins the OpenCensus project](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/)
-
-We are happy to announce that Microsoft is joining the open source OpenCensus project, and they are excited to help it achieve the goal of “a single distribution of libraries for metrics and distributed tracing”… [\[read more\]](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/)
-
----
-
-##### June 12, 2018
-
-#### [Hit the Ground Running with Distributed Tracing Core Concepts](https://medium.com/nikeengineering/hit-the-ground-running-with-distributed-tracing-core-concepts-ff5ad47c7058)
-
-Wondering what Distributed Tracing is, or having trouble making it work? Understanding its core concepts is a critical first step. Monolithic service architectures for large backend applications… [\[read more\]](https://medium.com/nikeengineering/hit-the-ground-running-with-distributed-tracing-core-concepts-ff5ad47c7058)
-
----
-
-##### May 13, 2018
-
-#### [OpenCensus for Go gRPC developers](https://medium.com/@orijtech/opencensus-for-go-grpc-developers-7f3ee1ac3d6d)
-
-In this tutorial, we’ll examine how to use OpenCensus in your gRPC projects in the Go programming language for observability both into your server and then client! We’ll then examine how we can integrate with… [\[read more\]](https://medium.com/@orijtech/opencensus-for-go-grpc-developers-7f3ee1ac3d6d)
-
----
-
-##### May 7, 2018
-
-#### [OpenCensus’s journey ahead: platforms and languages](https://opensource.googleblog.com/2018/05/opencensus-journey-ahead-part-1.html)
-
-We recently blogged about the value of OpenCensus and how Google uses Census internally. Today, we want to share more about our long-term vision for OpenCensus. The goal of OpenCensus is to… [\[read more\]](https://opensource.googleblog.com/2018/05/opencensus-journey-ahead-part-1.html)
-
----
-
-##### May 4, 2018
-
-#### [Practical and Useful Latency Analysis using Istio and OpenCenus](https://www.youtube.com/watch?v=ME-EhOKqFOY)
-
-Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 [\[read more\]](https://www.youtube.com/watch?v=ME-EhOKqFOY)
-
----
-
-##### April 30, 2018
-
-#### [Memcached clients instrumented with OpenCensus in Go and Python](https://medium.com/@orijtech/memcached-clients-instrumented-with-opencensus-in-go-and-python-dacbd01b269c)
-
-One of the most used server caching and scaling technologies is Memcached [http://memcached.org/](http://memcached.org/) Memcached is a high performance, free and open… [\[read more\]](https://medium.com/@orijtech/memcached-clients-instrumented-with-opencensus-in-go-and-python-dacbd01b269c)
-
----
-
-##### April 18, 2018
-
-#### [“Hello, world!” for web servers in Go with OpenCensus](https://medium.com/@orijtech/hello-world-for-web-servers-in-go-with-opencensus-29955b3f02c6)
-
-In this post, we’ll examine how you can minimally add distributed tracing and monitoring to your web server written in Go, with OpenCensus [https://opencensus.io](https://opencensus.io) [\[read more\]](https://medium.com/@orijtech/hello-world-for-web-servers-in-go-with-opencensus-29955b3f02c6)
-
----
-
-##### April 4, 2018
-
-#### [Twitter - Bob Quillin](https://twitter.com/bobquillin/status/981571739720167425)
-
-Great OpenCensus 101 explaining its role in microservices stats collection, distributed tracing, and support for multiple backends [https://storage.googleapis.com/opencensusio/OpenCensusVideo.mp4](https://storage.googleapis.com/opencensusio/OpenCensusVideo.mp4) [\[read more\]](https://twitter.com/bobquillin/status/981571739720167425)
-
----
-
-
-##### April 3, 2018
-
-#### [Tracing all the Fn things with OpenCensus](https://medium.com/fnproject/tracing-all-the-fn-things-with-opencensus-e579b268aeca)
-
-The Fn Project has decided to join all the other cool kids and become obsessed with tracing. Congratulations people, your meetup talks did not fall entirely on deaf ears. We’re really excited about moving all of our metrics… [\[read more\]](https://medium.com/fnproject/tracing-all-the-fn-things-with-opencensus-e579b268aeca)
-
----
-
-##### April 3, 2018
-
-#### [Twitter - Chad Arimura](https://twitter.com/chadarimura/status/981319453282467840)
-
-“Tracing all the Fn things with OpenCensus” by @rdallman10 [https://medium.com/fnproject/tracing-all-the-fn-things-with-opencensus-e579b268aeca](https://medium.com/fnproject/tracing-all-the-fn-things-with-opencensus-e579b268aeca) … #serverless #golang cc @opencensusio @CloudNativeFdn [\[read more\]](https://medium.com/fnproject/tracing-all-the-fn-things-with-opencensus-e579b268aeca)
-
----
-
-##### April 3, 2018
-
-#### [Twitter - Bruce MacVarish](https://twitter.com/brucemacv/status/981324918330744833)
-
-We’re really excited about moving all of our metrics and trace reporting to OpenCensus. #serverless #cloudnative @OracleIaaS [\[read more\]](https://twitter.com/brucemacv/status/981324918330744833)
-
----
-
-##### March 20, 2018
-
-#### [Measure Once — Export Anywhere: OpenCensus in the wild](https://blog.doit-intl.com/measure-once-export-anywhere-opencensus-in-the-wild-61724f44eb00)
-
-A few months ago, Google has announced OpenCensus, a vendor-neutral open source library for telemetry and tracing collection. [\[read more\]](https://blog.doit-intl.com/measure-once-export-anywhere-opencensus-in-the-wild-61724f44eb00)
-
----
-
-##### March 14, 2018
-
-#### [OpenCensus with Morgan McLean and JBD](https://www.gcppodcast.com/post/episode-118-opencensus-with-morgan-mclean-and-jbd/)
-
-Product Manager Morgan McLean and Software Engineer JBD join Melanie and Mark this week to discuss the new open source project OpenCensus, a single distribution of libraries for metrics and… [\[read more\]](https://www.gcppodcast.com/post/episode-118-opencensus-with-morgan-mclean-and-jbd/)
-
----
-
-##### March 13, 2018
-
-#### [The value of OpenCensus](https://opensource.googleblog.com/2018/03/the-value-of-opencensus.html)
-
-This post is the second in a series about OpenCensus. You can find the first post here. Early this year we open sourced OpenCensus, a distributed tracing and stats instrumentation framework. [\[read more\]](https://opensource.googleblog.com/2018/03/the-value-of-opencensus.html)
-
----
-
-##### March 12, 2018
-
-#### [OpenCensus Tracing w/ Stackdriver](https://medium.com/google-cloud/opencensus-tracing-w-stackdriver-a079fae52499)
-
-A customer’s engineers asked how they could combine OpenCensus tracing w/ App Engine Flexible in their Java apps and surface the results in Stackdriver. [\[read more\]](https://medium.com/google-cloud/opencensus-tracing-w-stackdriver-a079fae52499)
-
----
-
-##### March 8, 2018
-
-#### [Cloud Spanner, instrumented by OpenCensus and exported to Stackdriver](https://medium.com/@orijtech/cloud-spanner-instrumented-by-opencensus-and-exported-to-stackdriver-6ed61ed6ab4e)
-
-In this post, we’ll explore the power of OpenCensus’ exporters, using the Google Cloud Spanner package for both the Go and Java programming languages. [\[read more\]](https://medium.com/@orijtech/cloud-spanner-instrumented-by-opencensus-and-exported-to-stackdriver-6ed61ed6ab4e)
-
----
-
-##### March 7, 2018
-
-#### [How Google uses Census internally](https://opensource.googleblog.com/2018/03/how-google-uses-opencensus-internally.html)
-
-This post is the first in a series about OpenCensus, a set of open source instrumentation libraries based on what we use inside Google. This series will cover the benefits of OpenCensus for developers and vendors… [\[read more\]](https://opensource.googleblog.com/2018/03/how-google-uses-opencensus-internally.html)
-
----
-
-##### February 18, 2018
-
-#### [What is distributed tracing. Zoom on opencensus and opentracing](https://gianarb.it/blog/what-is-distributed-tracing-opentracing-opencensus)
-
-A few months ago I started to actively study, support and use opentracing and more in general the distributed tracing topic. In this article, I will share something about… [\[read more\]](https://gianarb.it/blog/what-is-distributed-tracing-opentracing-opencensus)
-
----
-
-##### January 23, 2018
-
-#### [OpenCensus — towards harmonizing your Instrumentation](http://dieswaytoofast.blogspot.com/2018/01/opencensustowards-harmonizing-your.html)
-
-You’ve really gotten into this whole Observability thing, and have started plugging Prometheus into, well, everything that doesn’t already have it. [\[read more\]](http://dieswaytoofast.blogspot.com/2018/01/opencensustowards-harmonizing-your.html)
-
----
-
-##### January 18, 2018
-
-#### [OpenCensus with Prometheus and Kubernetes](https://kausal.co/blog/opencensus-prometheus-kausal/)
-
-Yesterday, Google announced OpenCensus, an instrumentation framework for monitoring and tracing. It comes with a set of client libraries for Golang and Java, with more to come. [\[read more\]](https://kausal.co/blog/opencensus-prometheus-kausal/)
-
----
-
-##### January 17, 2018
-
-#### [OpenCensus: A Stats Collection and Distributed Tracing Framework](https://opensource.googleblog.com/2018/01/opencensus.html)
-
-Today we’re pleased to announce the release of OpenCensus, a vendor-neutral open source library for metric collection and tracing. [\[read more\]](https://opensource.googleblog.com/2018/01/opencensus.html)
diff --git a/content/codelabs/_index.html b/content/codelabs/_index.html
new file mode 100644
index 00000000..4656fa0c
--- /dev/null
+++ b/content/codelabs/_index.html
@@ -0,0 +1,6 @@
+---
+title: "Codelabs"
+date: 2018-07-19T13:41:01-07:00
+draft: false
+hidden: true
+---
diff --git a/content/codelabs/googlecloudstorage.md b/content/codelabs/googlecloudstorage.md
new file mode 100644
index 00000000..63807719
--- /dev/null
+++ b/content/codelabs/googlecloudstorage.md
@@ -0,0 +1,7 @@
+---
+title: "Google Cloud Storage"
+date: 2018-07-23T23:10:00-07:00
+draft: false
+hidden: true
+layout: googlecloudstorage
+---
diff --git a/content/codelabs/jaeger.md b/content/codelabs/jaeger.md
new file mode 100644
index 00000000..a2a9cde0
--- /dev/null
+++ b/content/codelabs/jaeger.md
@@ -0,0 +1,7 @@
+---
+title: "Jaeger"
+date: 2018-07-19T18:01:30-11:00
+draft: false
+hidden: true
+layout: jaeger
+---
diff --git a/content/codelabs/prometheus.md b/content/codelabs/prometheus.md
new file mode 100644
index 00000000..85a04905
--- /dev/null
+++ b/content/codelabs/prometheus.md
@@ -0,0 +1,7 @@
+---
+title: "Prometheus"
+date: 2018-07-19T15:08:00-07:00
+draft: false
+hidden: true
+layout: prometheus
+---
diff --git a/content/codelabs/spanner.md b/content/codelabs/spanner.md
new file mode 100644
index 00000000..c9881570
--- /dev/null
+++ b/content/codelabs/spanner.md
@@ -0,0 +1,7 @@
+---
+title: "Spanner"
+date: 2018-07-23T22:23:00-07:00
+draft: false
+hidden: true
+layout: spanner
+---
diff --git a/content/codelabs/stackdriver.md b/content/codelabs/stackdriver.md
new file mode 100644
index 00000000..3df5cc36
--- /dev/null
+++ b/content/codelabs/stackdriver.md
@@ -0,0 +1,7 @@
+---
+title: "Stackdriver"
+date: 2018-07-19T15:08:00-07:00
+draft: false
+hidden: true
+layout: stackdriver
+---
diff --git a/content/codelabs/zipkin.md b/content/codelabs/zipkin.md
new file mode 100644
index 00000000..2047d0ee
--- /dev/null
+++ b/content/codelabs/zipkin.md
@@ -0,0 +1,7 @@
+---
+title: "Zipkin"
+date: 2018-07-19T17:05:00-07:00
+draft: false
+hidden: true
+layout: zipkin
+---
diff --git a/content/community.md b/content/community.md
deleted file mode 100644
index 38288474..00000000
--- a/content/community.md
+++ /dev/null
@@ -1,26 +0,0 @@
-+++
-title = "Community"
-Description = "community"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-date = "2018-05-30T13:47:11-05:00"
-+++
-
-OpenCensus has an active community of developers who are using, enhancing and building valuable integrations with other software projects. We’d love your help to improve and extend the project. You can reach us via [Email](mailto:census-developers@googlegroups.com), [Gitter channel](https://gitter.im/census-instrumentation/Lobby) or [Twitter](https://twitter.com/opencensusio) to start engaging with the project and its members.
-
----
-
-#### Contribute on Github
-OpenCensus has an active community of developers who are using, enhancing and building valuable integrations with other software projects. We are always looking for active contributors in OpenCensus and OpenCensus Ecosystem. We would appreciate and love any community contributions to the OpenCensus project. Here is the link for development of [OpenCensus on GitHub](https://github.com/census-instrumentation). We look forward to your contributions.
-
-
----
-
-#### Mailing List
-Any questions or suggestions? You can reach us via email: [census-developers@googlegroups.com](mailto:census-developers@googlegroups.com). Just want to be in the loop of what is going on with the project? Join the [mailing list](https://groups.google.com/forum/#!forum/census-developers).
-
----
-
-#### Gitter Channel
-Join other developers and users on the OpenCensus [Gitter channel](https://gitter.im/census-instrumentation/Lobby).
diff --git a/content/community/_index.md b/content/community/_index.md
new file mode 100644
index 00000000..dc96f5dd
--- /dev/null
+++ b/content/community/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Community"
+date: 2018-07-16T14:46:21-07:00
+draft: false
+weight: 110
+---
+
+OpenCensus has an active community of developers who are using, enhancing and building valuable integrations with other software projects. We’d love your help to improve and extend the project. You can reach us via [Email](mailto:census-developers@googlegroups.com), [Gitter channel](https://gitter.im/census-instrumentation/Lobby) or [Twitter](https://twitter.com/opencensusio) to start engaging with the project and its members.
+
+#### Contribute on GitHub
+OpenCensus has an active community of developers who are using, enhancing and building valuable integrations with other software projects. We are always looking for active contributors in OpenCensus and the OpenCensus ecosystem, and we welcome community contributions to the project. Development happens on [GitHub](https://github.com/census-instrumentation). We look forward to your contributions.
+
+
+#### Mailing List
+Any questions or suggestions? You can reach us via email: [census-developers@googlegroups.com](mailto:census-developers@googlegroups.com). Just want to be in the loop of what is going on with the project? Join the [mailing list](https://groups.google.com/forum/#!forum/census-developers).
+
+#### Gitter Channel
+Join other developers and users on the OpenCensus [Gitter channel](https://gitter.im/census-instrumentation/Lobby).
diff --git a/content/core-concepts/_index.md b/content/core-concepts/_index.md
new file mode 100644
index 00000000..942ad75f
--- /dev/null
+++ b/content/core-concepts/_index.md
@@ -0,0 +1,12 @@
+---
+title: "Core Concepts"
+date: 2018-07-16T14:28:33-07:00
+draft: false
+weight: 30
+---
+
+In this section we will walk through the core technical components of OpenCensus and what problems they solve. If these concepts look confusing -- don't worry! You will come out of this section with a rudimentary understanding of what tools are available to you and how to use them to solve your challenges.
+
+{{% children %}}
+
+If you haven't already, consider giving the [Quickstart](/quickstart) a try.
diff --git a/content/core-concepts/exporters.md b/content/core-concepts/exporters.md
new file mode 100644
index 00000000..baf422af
--- /dev/null
+++ b/content/core-concepts/exporters.md
@@ -0,0 +1,99 @@
+---
+title: "Exporters"
+date: 2018-07-16T14:28:45-07:00
+draft: false
+---
+
+Data collected by OpenCensus can be exported to any analysis tool or storage backend.
+OpenCensus exporters can be contributed by anyone, and we provide support for several
+open source backends and vendors out-of-the-box.
+
+Once you choose your backend, follow the instructions to initialize an exporter.
+Then, register the initialized exporter.
+
+#### Stats
+
+As an example, a Prometheus exporter is registered, and Prometheus will scrape
+`:9091` to read the collected data:
+
+{{<tabs Go Java>}}
+ {{<highlight go>}}
+import (
+ "log"
+ "net/http"
+
+ "go.opencensus.io/exporter/prometheus"
+ "go.opencensus.io/stats/view"
+)
+
+func main() {
+ exporter, err := prometheus.NewExporter(prometheus.Options{Namespace: "demo"})
+ if err != nil {
+ log.Fatal(err)
+ }
+ view.RegisterExporter(exporter)
+
+ // In a separate goroutine, run the Prometheus metrics scraping handler
+ go func() {
+ http.Handle("/metrics", exporter)
+ log.Fatal(http.ListenAndServe(":9091", nil))
+ }()
+ // ... continue with your code
+}
+ {{</highlight>}}
+
+ {{<highlight java>}}
+// Add the dependencies by following the instructions at
+// https://github.com/census-instrumentation/opencensus-java/tree/master/exporters/stats/prometheus
+
+PrometheusStatsCollector.createAndRegister();
+
+// Uses a simple Prometheus HTTPServer to export metrics.
+io.prometheus.client.exporter.HTTPServer server =
+ new HTTPServer("localhost", 9091, true);
+ {{</highlight>}}
+{{</tabs>}}
+
+#### Traces
+
+As an example, a Zipkin exporter is registered. All collected trace data will be reported
+to the registered Zipkin endpoint:
+
+{{<tabs Go Java>}}
+ {{<highlight go>}}
+import (
+ openzipkin "github.com/openzipkin/zipkin-go"
+ "github.com/openzipkin/zipkin-go/reporter/http"
+ "go.opencensus.io/exporter/zipkin"
+ "go.opencensus.io/trace"
+)
+
+localEndpoint, err := openzipkin.NewEndpoint("example-server", "192.168.1.5:5454")
+if err != nil {
+ log.Println(err)
+}
+reporter := http.NewReporter("http://localhost:9411/api/v2/spans")
+defer reporter.Close()
+
+exporter := zipkin.NewExporter(reporter, localEndpoint)
+trace.RegisterExporter(exporter)
+ {{</highlight>}}
+
+ {{<highlight java>}}
+// Add the dependencies by following the instructions
+// at https://github.com/census-instrumentation/opencensus-java/tree/master/exporters/trace/zipkin
+
+ZipkinTraceExporter.createAndRegister(
+ "http://localhost:9411/api/v2/spans", "example-server");
+ {{</highlight>}}
+{{</tabs>}}
+
+Exporters can be registered and unregistered dynamically, but most users will register
+an exporter in their main application and never unregister it.
+
+Libraries instrumented with OpenCensus should not register exporters. Exporters should
+only be registered by main applications.
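
The register/unregister lifecycle described above can be sketched in a few lines of plain Go. This is an illustrative, stdlib-only sketch of the pattern, not the actual OpenCensus exporter API; the type and function names here are made up:

```go
package main

import (
	"fmt"
	"sync"
)

// SpanData is a simplified stand-in for a finished span.
type SpanData struct{ Name string }

// Exporter is the interface a trace backend would implement.
type Exporter interface{ ExportSpan(s SpanData) }

var (
	mu        sync.Mutex
	exporters = map[Exporter]struct{}{}
)

// RegisterExporter starts sending span data to e.
func RegisterExporter(e Exporter) {
	mu.Lock()
	defer mu.Unlock()
	exporters[e] = struct{}{}
}

// UnregisterExporter stops sending span data to e.
func UnregisterExporter(e Exporter) {
	mu.Lock()
	defer mu.Unlock()
	delete(exporters, e)
}

// export fans a finished span out to every registered exporter.
func export(s SpanData) {
	mu.Lock()
	defer mu.Unlock()
	for e := range exporters {
		e.ExportSpan(s)
	}
}

type printExporter struct{}

func (printExporter) ExportSpan(s SpanData) { fmt.Println("exported:", s.Name) }

func main() {
	e := printExporter{}
	RegisterExporter(e)
	export(SpanData{Name: "/messages"}) // reaches the exporter
	UnregisterExporter(e)
	export(SpanData{Name: "/ignored"}) // no registered exporters; dropped
}
```

Because the registry is process-global and guarded by a mutex, a main application can register an exporter once at startup, which is the typical usage described above.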
+
+#### Supported Backends
+
+**T**: the backend supports Tracing
+
+**S**: the backend supports Stats
+
+{{< sc_supportedExporters />}}
diff --git a/content/core-concepts/metrics.md b/content/core-concepts/metrics.md
new file mode 100644
index 00000000..4fe51a32
--- /dev/null
+++ b/content/core-concepts/metrics.md
@@ -0,0 +1,90 @@
+---
+title: "Metrics"
+date: 2018-07-16T14:28:40-07:00
+draft: false
+---
+
+Application and request metrics are important indicators
+of availability. Custom metrics can provide insights into
+how availability indicators impact user experience or the business.
+Collected data can help automatically
+generate alerts during an outage, or trigger scheduling
+decisions that scale up a deployment automatically under high demand.
+
+
+
+Stats collection allows users to collect custom metrics, and frameworks
+to provide a set of predefined metrics through their integrations.
+Collected data can be multidimensional and
+it can be filtered and grouped by [tags](/tags).
+
+Stats collection requires two steps:
+
+* Definition of measures and recording of data points.
+* Definition and registration of views to aggregate the recorded values.
+
+#### Measures
+
+A measure represents a metric type to be recorded. For example, request latency
+in µs and request size in KBs are examples of measures to collect from a server.
+All measures are identified by a name and also have a description and a unit.
+Libraries and frameworks can define and export measures for their end users to
+collect data on the provided measures.
+
+Below is an example measure that represents HTTP request latency in µs:
+
+```go
+RequestLatency = {
+ "http/request_latency",
+ "HTTP request latency in microseconds",
+ "microsecs",
+}
+```
+
+#### Recording
+A measurement is a data point collected for a measure. For example, for a latency (ms) measure, 100 is a measurement that represents a 100 ms latency event. Users collect data points on the existing measures with the current context. Tags from the current context are recorded with the measurements, if there are any.
+
+Recorded measurements are dropped if the user is not aggregating them via views. Users don't necessarily need to conditionally enable/disable recording to reduce cost; recording of measurements is cheap.
+
+Libraries can record measurements and provide measures,
+and end-users can later decide which measures
+they want to collect.
+
+#### Views
+
+In order to aggregate measurements and export them, users need to define views.
+A view allows recorded measurements to be aggregated cumulatively with one of the
+aggregation methods set by the user.
+All recorded measurements are broken down by user-provided [tag](/core-concepts/tags) keys.
+
+The following aggregation methods are supported:
+
+* **Count**: The count of the number of measurement points.
+* **Distribution**: Histogram distribution of the measurement points.
+* **Sum**: The sum of the measurement points.
+* **LastValue**: Keeps the last recorded value, drops everything else.
+
+Users can create and delete views at runtime. Libraries may
+export their own views and claim the view names by registering them.
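
The four aggregation methods above can be illustrated with a small stdlib-only Go sketch. This is not the OpenCensus implementation; the `viewData` type and its fields are hypothetical, but the arithmetic matches the descriptions above:

```go
package main

import "fmt"

// viewData accumulates measurements for one tag value using the four
// aggregation methods: Count, Sum, LastValue, and Distribution.
type viewData struct {
	count     int64
	sum       float64
	lastValue float64
	bounds    []float64 // histogram bucket upper bounds
	buckets   []int64   // histogram counts, one per bucket
}

func newViewData(bounds []float64) *viewData {
	return &viewData{bounds: bounds, buckets: make([]int64, len(bounds)+1)}
}

// record folds one measurement into every aggregation at once.
func (d *viewData) record(v float64) {
	d.count++
	d.sum += v
	d.lastValue = v
	i := 0
	for i < len(d.bounds) && v >= d.bounds[i] {
		i++
	}
	d.buckets[i]++
}

func main() {
	// One row per tag value, e.g. {frontend: "web"} vs {frontend: "ios"}.
	rows := map[string]*viewData{}
	measurements := []struct {
		frontend string
		latency  float64 // ms
	}{{"web", 120}, {"web", 310}, {"ios", 80}}
	for _, m := range measurements {
		row, ok := rows[m.frontend]
		if !ok {
			row = newViewData([]float64{100, 200, 400})
			rows[m.frontend] = row
		}
		row.record(m.latency)
	}
	web := rows["web"]
	fmt.Println(web.count, web.sum, web.lastValue, web.buckets)
}
```

Note how the map key plays the role of a tag value: each tag value gets its own aggregated row, which is the "broken down by tag keys" behavior described above.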
+
+#### Sampling
+
+Stats are NOT sampled, so that they can represent uncommon
+cases. For example, a [99th percentile latency issue](https://www.youtube.com/watch?v=lJ8ydIuPFeU)
+is rare; combined with a low sampling rate,
+it would be hard to capture.
+
+On the other hand, exporting every individual measurement would
+be very expensive in terms of network traffic. This is why stats
+collection aggregates data in the process and exports only the
+aggregated data.
+
+#### Exporting
+
+Collected and aggregated data can be exported to a stats collection
+backend by registering an exporter.
+
+Multiple exporters can be registered to upload the data to different backends.
+Users can unregister exporters that are no longer needed.
+
+See [exporters](/core-concepts/exporters) to learn more.
diff --git a/content/tags.md b/content/core-concepts/tags.md
similarity index 80%
rename from content/tags.md
rename to content/core-concepts/tags.md
index e049270a..20485b83 100644
--- a/content/tags.md
+++ b/content/core-concepts/tags.md
@@ -1,7 +1,10 @@
-+++
-title = "Tags"
-type = "leftnav"
-+++
+---
+title: "Tags"
+date: 2018-07-18T10:39:08-07:00
+draft: false
+class: "shadow-images"
+---
+
Tags allow us to associate contextual key-value pairs with collected data.
@@ -11,21 +14,21 @@ cases in isolation even in highly complex systems.
Tags have keys and values. Examples of tags:
-* {frontend: "web-0.12"}
-* {frontend: "ios-10.2.12"}
-* {http_endpoint: "server:8909/api/users"}
+* `{frontend: "web-0.12"}`
+* `{frontend: "ios-10.2.12"}`
+* `{http_endpoint: "server:8909/api/users"}`
Frontend tag allow users to breakdown the data between web, iOS and Android users.
HTTP endpoint allows to filter or group by a specific endpoint when
looking at the HTTP latency data.
-## Propagation
+#### Propagation
Tags may be defined in one service and used in a view in a different
service; they are propagated on the wire.
-In distributed systems, a service is likely to be depending
-on many other services.
+In distributed systems, a service is likely to depend
+on many other services.
This results in many challenges when analyzing the data
collected at the lower ends of the stack.
Instrumentation at lower layer might not be very valuable
@@ -37,7 +40,7 @@ Instead, we use tag propagation.
Higher level services produce tags, and lower-end uses them when
recording data.
-
+
Above, frontend depends on the authentication service. Authentication
service needs to query database that depends on the lower-level
diff --git a/content/core-concepts/tracing.md b/content/core-concepts/tracing.md
new file mode 100644
index 00000000..8d0b83a0
--- /dev/null
+++ b/content/core-concepts/tracing.md
@@ -0,0 +1,136 @@
+---
+title: "Tracing"
+date: 2018-07-16T14:28:37-07:00
+draft: false
+class: "shadow-images"
+---
+
+
+A trace tracks the progression of a single user request
+as it is handled by other services that make up an application.
+
+Each unit of work is called a span in a trace. Spans include metadata about the work,
+including the time spent in the step (latency) and status.
+You can use tracing to debug errors and
+latency issues of your application.
+
+#### Spans
+
+A trace is a tree of spans.
+
+A span is the unit of work represented in a trace. A span may
+represent an HTTP request, an RPC, a server handler,
+a database query, or a custom section marked in user code.
+
+
+
+Above, you see a trace with various spans. In order to respond
+to `/messages`, several other internal requests are made. First,
+we check whether the user is authenticated, then we try to
+get the results from the cache. It is a cache miss, so we
+query the database for the results, cache them,
+and respond to the user.
+
+There are two types of spans:
+
+* **Root span**: A root span doesn't have a parent span. It is the
+  first span in a trace. The `/messages` span above is a root span.
+* **Child span**: A child span has an existing span as its parent.
+
+
+Spans are identified with an ID and are associated with a trace.
+These identifiers, together with an options byte, form the span context.
+Inside the same process, span context is propagated in a context
+object. When crossing process boundaries, it is serialized into
+protocol headers. The receiving end can read the span context
+and create child spans.
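
To make the serialization step concrete, here is a stdlib-only Go sketch of encoding and decoding a span context. The `traceid-spanid-options` hex layout mirrors what tracing UIs display, but it is a simplified illustration, not the exact OpenCensus binary or B3 wire format:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// spanContext carries the identifiers that must cross process boundaries.
type spanContext struct {
	traceID [16]byte
	spanID  [8]byte
	options byte // bit 0: sampled
}

// marshal encodes the context as "traceid-spanid-options" in hex,
// the same shape tracing UIs show (e.g. a17625…-e9ec…-01).
func marshal(sc spanContext) string {
	return fmt.Sprintf("%s-%s-%02x",
		hex.EncodeToString(sc.traceID[:]),
		hex.EncodeToString(sc.spanID[:]),
		sc.options)
}

// unmarshal is what a receiving process would do with the incoming
// protocol header before starting child spans.
func unmarshal(h string) (spanContext, error) {
	var sc spanContext
	parts := strings.Split(h, "-")
	if len(parts) != 3 {
		return sc, fmt.Errorf("malformed header: %q", h)
	}
	t, err := hex.DecodeString(parts[0])
	if err != nil || len(t) != 16 {
		return sc, fmt.Errorf("bad trace id")
	}
	s, err := hex.DecodeString(parts[1])
	if err != nil || len(s) != 8 {
		return sc, fmt.Errorf("bad span id")
	}
	o, err := hex.DecodeString(parts[2])
	if err != nil || len(o) != 1 {
		return sc, fmt.Errorf("bad options")
	}
	copy(sc.traceID[:], t)
	copy(sc.spanID[:], s)
	sc.options = o[0]
	return sc, nil
}

func main() {
	sc := spanContext{options: 0x01}
	copy(sc.traceID[:], "0123456789abcdef")
	copy(sc.spanID[:], "01234567")
	h := marshal(sc)
	back, err := unmarshal(h)
	fmt.Println(h, err == nil, back == sc)
}
```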
+
+#### Name
+
+Span names represent what the span does. Span names should
+be statistically meaningful. Most tracing backends and analysis
+tools use span names to auto-generate reports for the
+represented work.
+
+Examples of span names:
+
+* "cache.Get" represents the Get method of the cache service.
+* "/messages" represents the messages web page.
+* "/api/user/(\\d+)" represents the user detail pages.
+
+#### Status
+
+Status represents the current state of the span.
+It is represented by a canonical status code which maps onto a
+predefined set of error values and an optional string message.
+
+Status allows tracing visualization tools to highlight
+unsuccessful spans and helps tracing users to debug errors.
+
+
+
+Above, you can see that `cache.Put` errored because the key size
+limit was exceeded. As a result of this error,
+the `/messages` request responded to the user with an error.
+
+#### Annotations
+
+Annotations are timestamped strings with optional attributes.
+Annotations are used like log lines, but the log is per-Span.
+
+Example annotations:
+
+* 0.001s Evaluating database failover rules.
+* 0.002s Failover replica selected. attributes:{replica:ab_001 zone:xy}
+* 0.006s Response received.
+* 0.007s Response requires additional lookups. attributes:{fanout:4}
+
+Annotations provide rich details to debug problems in the scope of a span.
+
+#### Attributes
+
+Attributes are additional information included in the span,
+representing arbitrary data assigned by the user.
+They are key-value pairs: the key is a string and the
+value is a string, boolean, or integer.
+
+Examples of attributes:
+
+* `{http_code: 200}`
+* `{zone: "us-central2"}`
+* `{container_id: "replica04ed"}`
+
+Attributes can be used to query tracing data and allow
+users to filter large volumes of it. For example, you can
+filter traces by HTTP status code or availability zone by
+using the example attributes above.
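
The kind of attribute filtering a backend performs can be sketched in stdlib-only Go. The `filter` helper below is hypothetical; it simply keeps the spans whose attributes match every requested key/value:

```go
package main

import "fmt"

// attrs is one span's attribute set; values may be string, bool, or int,
// matching the three value types described above.
type attrs map[string]interface{}

// filter returns the spans whose attributes match every key/value in want —
// the kind of query a tracing backend runs over collected spans.
func filter(spans []attrs, want attrs) []attrs {
	var out []attrs
	for _, s := range spans {
		ok := true
		for k, v := range want {
			if s[k] != v {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	spans := []attrs{
		{"http_code": 200, "zone": "us-central2"},
		{"http_code": 500, "zone": "us-central2"},
		{"http_code": 200, "zone": "eu-west1"},
	}
	// Filter traces by HTTP status code and availability zone.
	matched := filter(spans, attrs{"http_code": 200, "zone": "us-central2"})
	fmt.Println(len(matched))
}
```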
+
+#### Sampling
+
+Trace data is often very large and is expensive to collect.
+This is why, rather than collecting traces for every request, downsampling
+is preferred. By default, OpenCensus provides a probabilistic sampler that
+will trace once in every 10,000 requests.
+
+You can set a custom probabilistic sampler, or choose to always sample or
+never sample at all.
+There are two ways to set samplers:
+
+* **Global sampler**: The global default sampler.
+* **Span sampler**: When starting a new span, a custom
+  sampler can be provided. If no custom sampler is
+  provided, the global sampler is used. Span samplers are
+ useful if you want to over-sample some sections of your
+ code. For example, a low throughput background service
+ may use a higher sampling rate than a high-load RPC
+ server.
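
A trace-ID-based probabilistic sampler can be sketched in stdlib-only Go. Deciding from the trace ID keeps the decision consistent for every span of a trace, regardless of which process makes it; the exact constants below are illustrative and not necessarily those used by OpenCensus:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// probabilitySampler returns a sampler that makes a deterministic
// decision from the trace ID: the high 63 bits of the first 8 bytes
// are treated as a uniform value and compared against a bound.
func probabilitySampler(fraction float64) func(traceID [16]byte) bool {
	bound := uint64(fraction * (1 << 63))
	return func(traceID [16]byte) bool {
		x := binary.BigEndian.Uint64(traceID[:8]) >> 1
		return x < bound
	}
}

func main() {
	// One-in-10,000 default, as described above.
	sample := probabilitySampler(1.0 / 10000)

	sampled := 0
	var id [16]byte
	for i := 0; i < 1000000; i++ {
		// Spread pseudo-random trace IDs across the key space.
		binary.BigEndian.PutUint64(id[:8], uint64(i)*0x9E3779B97F4A7C15)
		if sample(id) {
			sampled++
		}
	}
	fmt.Println("sampled", sampled, "of 1000000")
}
```

A span-level sampler is then just a different `fraction` (or an always/never function) applied when that particular span starts, falling back to the global one otherwise.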
+
+#### Exporting
+
+Recorded spans will be reported by the registered exporters.
+
+Multiple exporters can be registered to upload the data to
+different backends. Users can unregister exporters
+that are no longer needed.
+
+See [exporters](/core-concepts/exporters) to learn more.
diff --git a/content/core-concepts/z-pages.md b/content/core-concepts/z-pages.md
new file mode 100644
index 00000000..7375abee
--- /dev/null
+++ b/content/core-concepts/z-pages.md
@@ -0,0 +1,74 @@
+---
+title: "zPages"
+date: 2018-07-16T14:28:48-07:00
+draft: false
+class: "shadow-images"
+---
+
+OpenCensus provides in-process web pages that display
+collected data from the process. These pages are called zPages
+and they are useful for seeing collected data from a specific process
+without having to depend on any metrics collection or
+distributed tracing backend.
+
+zPages can be useful during development or when
+the process to be inspected is known in production.
+zPages can also be used to debug [exporter](/core-concepts/exporters) issues.
+
+In order to serve zPages, register their handlers and
+start a web server. Below is an example of how to
+serve these pages from `127.0.0.1:7777/debug`.
+
+{{<tabs Go Java>}}
+ {{<highlight go>}}
+import (
+ "log"
+ "net/http"
+
+ "go.opencensus.io/zpages"
+)
+
+func main() {
+ // Using the default serve mux, but you can create your own
+ mux := http.DefaultServeMux
+ zpages.Handle(mux, "/debug")
+ log.Fatal(http.ListenAndServe("127.0.0.1:7777", mux))
+}
+ {{</highlight>}}
+
+ {{<highlight java>}}
+// Add the dependencies by following the instructions at
+// https://github.com/census-instrumentation/opencensus-java/tree/master/contrib/zpages
+
+ZPageHandlers.startHttpServerAndRegisterAll(7777);
+ {{</highlight>}}
+{{</tabs>}}
+
+Once the handlers are registered, several pages are served
+by the libraries:
+
+* [127.0.0.1:7777/debug/rpcz](http://127.0.0.1:7777/debug/rpcz)
+* [127.0.0.1:7777/debug/tracez](http://127.0.0.1:7777/debug/tracez)
+
+#### /rpcz
+
+/rpcz serves stats about sent and received RPCs; see it live at [/rpcz](http://127.0.0.1:7777/debug/rpcz).
+
+Available stats include:
+
+* Number of RPCs made per minute, hour and in total.
+* Average latency in the last minute, hour and since the process started.
+* RPCs per second in the last minute, hour and since the process started.
+* Input payload in KB/s in the last minute, hour and since the process started.
+* Output payload in KB/s in the last minute, hour and since the process started.
+* Number of RPC errors in the last minute, hour and in total.
+
+#### /tracez
+
+[/tracez](http://127.0.0.1:7777/debug/tracez) serves details about
+the trace spans collected in the process. It provides several sample spans
+per latency bucket and sample errored spans.
+
+An example screenshot from this page is below:
+
+
diff --git a/content/cpp.md b/content/cpp.md
deleted file mode 100644
index fd47a2f6..00000000
--- a/content/cpp.md
+++ /dev/null
@@ -1,111 +0,0 @@
-+++
-title = "C++"
-Description = "C++"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-date = "2018-05-16T12:02:16-05:00"
-+++
-
-Build and run the example:
-
-```cpp
-git clone https://github.com/census-instrumentation/opencensus-cpp.git
-
-cd opencensus-cpp
-bazel build //examples/helloworld:helloworld
-bazel-bin/opencensus/examples/helloworld/hello_world
-```
-
----
-
-#### Quickstart Example
-
-The example demonstrates how to record stats and traces for a video processing system. It records data with the “frontend” tag so that collected data can be broken by the frontend user who initiated the video processing.
-
-In this case we are using stdout exporters which we register at the beginning.
-
-```cpp
-// Register stdout exporters.
-opencensus::exporters::stats::StdoutExporter::Register();
-opencensus::exporters::trace::StdoutExporter::Register();
-```
-
-We define a measure for video size which records the sizes in megabytes "MBy".
-
-```cpp
-// Call measure so that it is initialized.
-VideoSizeMeasure();
-```
-
-We create a view and register it with the local Stdout exporter.
-
-``` cpp
-// Create view to see the processed video size distribution broken down
-// by frontend. The view has bucket boundaries (0, 256, 65536)
-//that will group measure values into histogram buckets.
-constopencensus::stats::ViewDescriptor video_size_view =
-opencensus::stats::ViewDescriptor()
- .set_name(kVideoSizeViewName)
- .set_description("processed video size over time")
- .set_measure(kVideoSizeMeasureName)
- .set_aggregation(opencensus::stats::Aggregation::Distribution
- (opencensus::stats::BucketBoundaries::Exponential(2, 256, 256)))
- .add_column(kFrontendKey);
-
-// Create the view.
-opencensus::stats::View view(video_size_view);
-
-// Register the view for export.
-video_size_view.RegisterForExport();
-```
-
-Example View
-
-```
-name: "my.org/views/video_size"
-measure: name: "my.org/measure/video_size"; units: "MBy";
-description: "size of processed videos"; type: int64
-aggregation: Distribution with Buckets:
- 0,256,512,1024,2048,4096,8192,16384,32768
-aggregation window: Cumulative
-columns: my.org/keys/frontend
-description: "processed video size over time"
-video size : count: 1 mean: 25648 sum of squared deviation: 0 min: 25648 max: 25648
-histogram counts: 0, 0, 0, 0, 0, 0, 0, 0, 1, 0
-```
-
-In this case the view stores a distribution. The example records 1 video size to the view, which is 25648. This shows up in the histogram, with 1 bucket having a single value in it.
-
-```cpp
-opencensus::stats::Record({{VideoSizeMeasure(), 25648}},{{kFrontendKey, "video size"}});
-```
-
-Example Span
-
-```
-Name: my.org/ProcessVideo
-TraceId-SpanId-Options:
- a17625c6ed57d878092ea01fe87ded35-e9ec94e4de02fadb-01
-Parent SpanId: 0000000000000000 (remote: false)
-Start time: 2018-03-04T22:42:54.492839757-08:00
-End time: 2018-03-04T22:42:54.500995971-08:00
-Attributes: (0 dropped)
-Annotations: (0 dropped)
- 2018-03-04T22:42:54.492858312-08:00: Start processing video.
- 2018-03-04T22:42:54.500992796-08:00: Finished processing video.
-Message events: (0 dropped)
-Links: (0 dropped)
-Span ended: true
-Status: OK
-```
-
-Span context information is displayed in hexadecimal on a single line which is the concatenation of TraceId, SpanId, and span options. Parent SpanId is displayed on the following line. In this case there is no parent (root span), so the parent id is 0. There were 2 attributes added. After work has been completed a span must be ended by the user. A span that is still active (i.e. not ended), will not be exported.
-
-```cpp
-span.AddAnnotation("Start processing video.");
-...
-span.AddAnnotation("Finished processing video.");
-span.End();
-```
diff --git a/content/docs.md b/content/docs.md
deleted file mode 100644
index 46118395..00000000
--- a/content/docs.md
+++ /dev/null
@@ -1,156 +0,0 @@
-+++
-title = "Documentation"
-+++
-
-Welcome to the developer documentation for OpenCensus.
-Learn about key OpenCensus concepts, find library docs,
-reference material, and tutorials.
-
-OpenCensus terminology is explained at the [glossary](/glossary).
-
----
-
-## Concepts
-
-* [Overview](/overview)
-* [Tags](/tags)
-* [Stats](/stats)
-* [Exporters](/exporters)
-* [Z-Pages](/zpages)
-
----
-
-## Libraries
-
-
-
-
-
-
----
-
-## Use Cases
-
-* [Cloud Spanner - Instrumented by OpenCensus and exported to Stackdriver][1]
-* [OpenCensus for gRPC Go][2]
-
-
-[1]: https://medium.com/@orijtech/cloud-spanner-instrumented-by-opencensus-and-exported-to-stackdriver-6ed61ed6ab4e
-[2]: /gogrpc
diff --git a/content/erlang.md b/content/erlang.md
deleted file mode 100644
index c32fb03e..00000000
--- a/content/erlang.md
+++ /dev/null
@@ -1,144 +0,0 @@
-+++
-Description = "erlang"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-
-title = "Erlang"
-date = "2018-05-17T15:41:51-05:00"
-+++
-
-
-The example demonstrates how to record stats and traces for a video processing system. It records data with the "frontend" tag so that collected data can be broken by the frontend user who initiated the video processing. Code for this example can be found under the `examples/helloworld` directory of the [OpenCensus Erlang repo](https://github.com/census-instrumentation/opencensus-erlang).
-
----
-
-#### API Documentation
-
-The OpenCensus Erlang API artifact is available [here](https://hexdocs.pm/opencensus/0.3.0/index.html).
-
----
-
-#### Example
-
-**Prerequisite**
-[Erlang/OTP 20](http://www.erlang.org/) and [rebar3](http://www.rebar3.org/) are required.
-
-**Using**
-Create a new Erlang application with `rebar3 new` named `helloworld`:
-
-```erlang
-$ rebar3 new app helloworld
-===> Writing helloworld/src/helloworld_app.erl
-===> Writing helloworld/src/helloworld_sup.erl
-===> Writing helloworld/src/helloworld.app.src
-===> Writing helloworld/rebar.config
-===> Writing helloworld/.gitignore
-===> Writing helloworld/LICENSE
-===> Writing helloworld/README.md
-$ cd helloworld
-```
-
-Add `opencensus` as a dependency in `rebar.config`. For development purposes it is also useful to include the `shell` section of the config which tells rebar3 to boot the application and load configuration when running `rebar3 shell`:
-
-```erlang
-{erl_opts, [debug_info]}.
-{deps, [opencensus]}.
-
-{shell, [{apps, [helloworld]},
- {config, "config/sys.config"}]}.
-```
-
-```erlang
-[
-{opencensus, [{sampler, {oc_sampler_always, []}},
- {reporter, {oc_reporter_stdout, []}},
- {stat, [{exporters, [{oc_stat_exporter_stdout, []}]}]}
-].
-```
-
-`opencensus` is a runtime dependency so it is added to the applications list in `src/helloworld.app.src`, ensuring it is included and started in a release of `helloworld`:
-
-```erlang
-{application, helloworld,
- [{description, "Example OpenCensus application"},
- {vsn, "0.1.0"},
- {registered, []},
- {mod, { helloworld_app, []}},
- {applications,
- [kernel,
- stdlib,
- opencensus
- ]},
- {env,[]},
- {modules, []},
-
- {maintainers, []},
- {licenses, ["Apache 2.0"]},
- {links, []}
-]}.
-```
-
-Building the application with `rebar3 compile` will fetch the OpenCensus Erlang library and its dependencies.
-
-When our application starts it needs to create and subscribe to the statistics that we'll record. So a call to `subscribe_views/0` is added to the application start function, `helloworld_app:start/2`:
-
-```erlang
-subscribe_views() ->
- oc_stat_view:subscribe(#{name => "video_size",
- description => "size of processed videos",
- tags => ['frontend'],
- measure => 'my.org/measure/video_size',
- aggregation => default_size_distribution()}).
-
-default_size_distribution() ->
- {oc_stat_aggregation_distribution,
- [{buckets, [0, 1 bsl 16, 1 bsl 32]}]}.
-```
-
-The main module called to actually do the video processing is `helloworld`. It creates a tag for who made the process request to include with the record statistic and creates a span for the duration of the video processing (a random sleep between 0 and 10 seconds):
-
-```erlang
--module(helloworld).
-
--export([process/0]).
-
-process() ->
- %% create a tag for who sent the request and start a child span
- Tags = oc_tags:new(#{'frontend' => "mobile-ios9.3.5"}),
- ocp:with_child_span(<<"my.org/ProcessVideo">>),
-
- %% sleep for 0-10 seconds to simulate processing time
- timer:sleep(timer:seconds(rand:uniform(10))),
-
- %% finish the span
- ocp:finish_span(),
-
- %% record the size of the video
- oc_stat:record(Tags, 'my.org/measure/video_size', 25648).
-```
-
-Run the application with `rebar3 shell` and see the stats and span reported to the console:
-
-
-```erlang
-$ rebar3 shell
-…
-===> Booted opencensus
-===> Booted helloworld
-> helloworld:process().
-ok
-{span,<<“my.org/ProcessVideo”>>,1201374966367397737078249396493886473,
- 10421649746227310879,undefined,1,
- {-576460748652616660,2097430124176280981},
- {-576460740651723247,2097430124176280981},
- #{},undefined,[],[],undefined,undefined,undefined}
-video_size: #{rows =>
- [#{tags => #{“frontend” => “mobile-ios9.3.5”},
- value =>
- #{buckets =>
- [{0,0},{65536,1},{4294967296,0},{infinity,0}],
- count => 1,mean => 25648.0,sum => 25648}}],
- type => distribution}
-```
diff --git a/content/exporters.md b/content/exporters.md
deleted file mode 100644
index 82cb34cb..00000000
--- a/content/exporters.md
+++ /dev/null
@@ -1,97 +0,0 @@
-+++
-title = "Exporters"
-type = "leftnav"
-+++
-
-Data collected by OpenCensus can be exported to any analysis tool or storage backend.
-OpenCensus exporters can be contributed by anyone, and we provide support for several
-open source backends and vendors out-of-the-box.
-
-Once you choose your backend, follow the instructions to initialize an exporter.
-Then, register the initialized exporter.
-
-## Stats
-
-As an example, a Prometheus exporter is registered and Prometheus is going to scrape
-`:9091` to read the collected data:
-
-{{% snippets %}}
-{{% go %}}
-``` go
-import (
- "go.opencensus.io/exporter/prometheus"
- "go.opencensus.io/stats/view"
-)
-
-exporter, err := prometheus.NewExporter(prometheus.Options{})
-if err != nil {
- log.Fatal(err)
-}
-view.RegisterExporter(exporter)
-
-http.Handle("/metrics", exporter)
-log.Fatal(http.ListenAndServe(":9091", nil))
-```
-{{% /go %}}
-{{% java %}}
-```
-// Add the dependencies by following the instructions at
-// https://github.com/census-instrumentation/opencensus-java/tree/master/exporters/stats/prometheus.
-
-PrometheusStatsCollector.createAndRegister();
-
-// Uses a simple Prometheus HTTPServer to export metrics.
-io.prometheus.client.exporter.HTTPServer server =
- new HTTPServer("localhost", 9091, true);
-```
-{{% /java %}}
-{{% /snippets %}}
-
-## Traces
-
-As an example, a Zipkin exporter is registered. All collected trace data will be reported
-to the registered Zipkin endpoint:
-
-{{% snippets %}}
-{{% go %}}
-```
-import (
- openzipkin "github.com/openzipkin/zipkin-go"
- "github.com/openzipkin/zipkin-go/reporter/http"
- "go.opencensus.io/exporter/zipkin"
- "go.opencensus.io/trace"
-)
-
-localEndpoint, err := openzipkin.NewEndpoint("example-server", "192.168.1.5:5454")
-if err != nil {
- log.Println(err)
-}
-reporter := http.NewReporter("http://localhost:9411/api/v2/spans")
-defer reporter.Close()
-
-exporter := zipkin.NewExporter(reporter, localEndpoint)
-trace.RegisterExporter(exporter)
-```
-{{% /go %}}
-{{% java %}}
-```
-// Add the dependencies by following the instructions
-// at https://github.com/census-instrumentation/opencensus-java/tree/master/exporters/trace/zipkin.
-
-ZipkinTraceExporter.createAndRegister(
- "http://localhost:9411/api/v2/spans", "example-server");
-```
-{{% /java %}}
-{{% /snippets %}}
-
-Exporters can be registered dynamically and unregistered. But most users will register
-an exporter in their main application and never unregister it.
-
-Libraries instrumented with OpenCensus should not register exporters. Exporters should
-only be registered by main applications.
-
-## Supported Backends
-
-Exporter support in each language for each backend:
-
-{{< sc_supportedExporters />}}
diff --git a/content/faq.md b/content/faq.md
deleted file mode 100644
index 65e72f6f..00000000
--- a/content/faq.md
+++ /dev/null
@@ -1,61 +0,0 @@
-+++
-title = "FAQ"
-Description = "faq"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-date = "2018-05-10T14:14:33-05:00"
-+++
-
-#### Who is behind OpenCensus?
-
-OpenCensus originates from Google, where a set of libraries called Census were used to automatically capture traces and metrics from services. Since going open source, the project is now composed of a group of cloud providers, application performance management vendors, and open source contributors. The project is hosted in [GitHub](https://github.com/census-instrumentation/) and all work occurs there.
-
----
-
-#### How does OpenCensus benefit the ecosystem?
-
-* Making application metrics and distributed traces more accessible to developers. Today, one of the biggest challenges with gathering this information is the lack of good automatic instrumentation, as tracing and APM vendors have typically supplied their own limited, incompatible instrumentation solutions. With OpenCensus, more developers will be able to use these tools, which will improve the overall quality of their services and the web at large.
-* APM vendors will benefit from less setup friction for customers, broader language and framework coverage, and reduced effort spent in designing and maintaining their own instrumentation.
-* Local debugging capabilities. OpenCensus’s optional agent can be used to view requests and metrics locally and can dynamically change the sampling rate of traces, both of which are incredibly useful during critical production debugging sessions.
-* Collaboration and support from vendors (cloud providers like Google and Microsoft in addition to APM companies) and open source providers (Zipkin). As the OpenCensus libraries include instrumentation hooks into various web and RPC frameworks and exporters, they are immediately useful out of the box.
-* Allowing service providers to better debug customer issues. As OpenCensus defines a common context propagation format, customers experiencing issues can provide a request ID to providers so that they can debug the problem together. Ideally, providers can trace the same requests as customers, even if they are using different analysis systems.
-
----
-
-#### What languages & integrations does OpenCensus support?
-
-{{< sc_supportedLanguages />}}
-
----
-
-#### What Exporters does OpenCensus support?
-
-{{< sc_supportedExporters />}}
-
----
-
-#### How do I use OpenCensus in my application?
-If you are using a supported application framework, follow its instructions for configuring OpenCensus.
-
-* Choose a supported APM tool and follow its configuration instructions for using OpenCensus.
-* You can also use the OpenCensus z-Pages to view your tracing data without an APM tool.
-
-A user’s guide will be released as soon as possible.
-
----
-
-#### What are the z-Pages?
-
-OpenCensus provides a stand-alone application that uses a gRPC channel to communicate with the OpenCensus code linked into your application. The application displays configuration parameters and trace information in real time held in the OpenCensus library.
-
----
-
-#### How can I contribute to OpenCensus?
-
-* Help people on the discussion forums.
-* Tell us your success story using OpenCensus.
-* Tell us how we can improve OpenCensus, and help us do it.
-* Contribute to an existing library or create one for a new language.
-* Integrate OpenCensus with a new framework.
-* Integrate OpenCensus with a new APM tool.
diff --git a/content/faq/_index.md b/content/faq/_index.md
new file mode 100644
index 00000000..87cf5478
--- /dev/null
+++ b/content/faq/_index.md
@@ -0,0 +1,74 @@
+---
+title: "FAQ"
+date: 2018-07-16T14:46:09-07:00
+draft: false
+weight: 90
+class: "resized-logo"
+---
+
+##### Who is behind OpenCensus?
+
+OpenCensus originates from Google, where a set of libraries called Census were used to automatically
+capture traces and metrics from services. Since going open source, the project has grown to include a
+group of cloud providers, application performance management vendors, and open source contributors.
+
+The project is hosted on [GitHub @census-instrumentation](https://github.com/census-instrumentation/) and all work occurs there.
+
+
+##### How does OpenCensus benefit the ecosystem?
+
+* Making application metrics and distributed traces more accessible to developers.
+Today, one of the biggest challenges with gathering this information is the lack of good
+automatic instrumentation, as tracing and APM vendors have typically supplied their own limited,
+incompatible instrumentation solutions. With OpenCensus, more developers will be able to use these
+tools, which will improve the overall quality of their services and the web at large.
+
+* APM vendors will benefit from less setup friction for customers, broader language and framework coverage, and reduced effort spent in designing and maintaining their own instrumentation.
+
+* Local debugging capabilities. OpenCensus’s optional agent can be used to view requests and metrics locally and can dynamically change the sampling rate of traces, both of which are incredibly useful during critical production debugging sessions.
+
+* Collaboration and support from vendors (cloud providers like Google and Microsoft in addition to APM companies) and open source providers (Zipkin). As the OpenCensus libraries include instrumentation hooks into various web and RPC frameworks and exporters, they are immediately useful out of the box.
+
+* Allowing service providers to better debug customer issues. As OpenCensus defines a common context propagation format, customers experiencing issues can provide a request ID to providers so that they can debug the problem together. Ideally, providers can trace the same requests as customers, even if they are using different analysis systems.
+
+
+
+##### What languages & integrations does OpenCensus support?
+
+
+
+{{< sc_supportedLanguages />}}
+
+
+
+##### What Exporters does OpenCensus support?
+In the table below, **T** indicates that the backend supports Tracing,
+and **S** indicates that the backend supports Stats.
+
+{{< sc_supportedExporters />}}
+
+
+
+##### How do I use OpenCensus in my application?
+If you are using a supported application framework, follow its instructions for configuring OpenCensus.
+
+* Choose a supported APM tool and follow its configuration instructions for using OpenCensus.
+* You can also use the OpenCensus zPages to view your tracing data without an APM tool.
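+
+As a minimal sketch in Go (assuming the `go.opencensus.io` library and an
+exporter of your choice are installed and registered), tracing a unit of work
+looks roughly like this:
+
+```go
+import (
+	"context"
+
+	"go.opencensus.io/trace"
+)
+
+// doWork wraps a unit of work in a span; the span is delivered to any
+// registered exporters once it is ended.
+func doWork(ctx context.Context) {
+	ctx, span := trace.StartSpan(ctx, "doWork")
+	defer span.End()
+	// ... application logic using ctx ...
+	_ = ctx
+}
+```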
+
+##### What are the zPages?
+
+OpenCensus provides a stand-alone application that uses a gRPC channel to communicate with the OpenCensus code linked into your application. It displays, in real time, the configuration parameters and trace information held in the OpenCensus library.
+
+You can learn more by visiting [zPages](/core-concepts/z-pages/)
+
+
+
+##### How can I contribute to OpenCensus?
+
+* Help people on the discussion forums.
+* Tell us your success story using OpenCensus.
+* Tell us how we can improve OpenCensus, and help us do it.
+* Contribute to an existing library or create one for a new language.
+* Integrate OpenCensus with a new framework.
+* Integrate OpenCensus with a new APM tool.
diff --git a/content/glossary.md b/content/glossary.md
deleted file mode 100644
index 83eb0a08..00000000
--- a/content/glossary.md
+++ /dev/null
@@ -1,115 +0,0 @@
-+++
-title = "Glossary"
-Description = "glossary"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "glossary"
-date = "2018-05-14T16:11:08-05:00"
-+++
-
-**Tracing**
-
-Distributed Tracing tracks the progression of a request as it is handled by the services and processes that make up a distributed system. Each step in the Trace is called a Span. Each Span contains metadata, such as the length of time spent in the step, and optionally annotations about the work being performed. This information can be used for debugging and performance tuning of distributed systems.
-
----
-
-**Span**
-
-A unit work in a trace. Each span might have children spans within. If a span does not have a parent, it is called a root span.
-
----
-
-**Root Span**
-The first span in a trace; this span does not have a parent.
-
----
-
-**Child Span**
-A span with a parent span; it represents a unit of work that originates from an existing span.
-A span MAY or MAY NOT be a child and spans can coexist as siblings on the same level.
-
----
-
-**Trace**
-A tree of spans. A trace has a root span that encapsulates all the spans from start to end, and the children spans being the distinct calls invoked in between.
-
----
-
-**Sampling**
-
-Sampling determines how often requests will be traced.
-There are three types of sampling made available from OpenCensus libraries:
-
-* **Always sample:** With this sampling, every single request is traced.
-* **Probabilisticly sample:** A probability is set (e.g 0.0001) and the libraries will sample according to that probability (e.g. 1 in 10000 requests).
-* **Never sample:** No request is traced.
-
----
-
-**Measure**
-Any quantifiable metric. Examples of measures are number of requests, number of failed responses, bytes downloaded, cache misses, number of transactions, etc.
-
----
-
-**Aggregation**
-
-OpenCensus doesn't export each collected measurement. It preaggregates measurements in the process.
-OpenCensus provides the following aggregations:
-
-* **Count:** Reports the count of the recorded data points. For example, number of requests.
-* **Distribution:** Reports the histogram distribution of the recorded measurements.
-* **Sum:** Reports the sum of the recorded measurements.
-* **Last Value:** Reports on the last recorded measurement and drops everything else.
-
----
-
-**Tags**
-
-Tags are key-value pairs that can be recorded with measurements.
-They are later used to breakdown the collected data in the metric collection backend.
-Tags should be designed to meet the data querying requirements. Examples of tags:
-
-* ip=10.32.103.12
-* version=1.23
-* frontend=ios-10.3.1
-
----
-
-**Views**
-
-A view manages the aggregation of a measure and exports the collected data.
-Recordings on the measures won't be exported until a view is registered to
-collect from the measure.
-
----
-
-**View Data**
-
-View Data is the exported data from a view.
-It contains view information, aggregated data,
-start and end time of the collection.
-
----
-
-**Exporters**
-
-Exporters allow for metrics and traces collected by
-OpenCensus to be uploaded to the other services.
-Various providers have OpenCensus exporters, examples are
-Stackdriver Monitoring and Tracing, Prometheus, SignalFX, Jaeger.
-
----
-
-**Context Propagation**
-
-The mechanism to transmit identifiers or metadata on the wire
-and in-process. For example, trace span identifier and stats tags
-can be propagated in the process and among multiple processors.
-
----
-
-**RPC**
-
-Remote Procedure Call. The mechanism of invoking a procedure/subroutine in a different scope/address space.
-
----
\ No newline at end of file
diff --git a/content/go.md b/content/go.md
deleted file mode 100644
index 9002e7ae..00000000
--- a/content/go.md
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-See the [OpenCensus Go](https://github.com/census-instrumentation/opencensus-go) repo.
diff --git a/content/gogrpc.md b/content/gogrpc.md
deleted file mode 100644
index 1737ac8a..00000000
--- a/content/gogrpc.md
+++ /dev/null
@@ -1,669 +0,0 @@
-+++
-Description = "go grpc"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-title = "OpenCensus for Go gRPC"
-date = "2018-05-31T11:12:41-05:00"
-+++
-
-In this tutorial, we’ll examine how to use OpenCensus in your gRPC projects in the Go programming language for observability both into your server and then client! We’ll then examine how we can integrate with OpenCensus exporters from AWS X-Ray, Prometheus, Zipkin and Google Stackdriver Tracing and Monitoring.
-
----
-
-gRPC is a modern high performance framework for remote procedure calls, powered by Protocol Buffer encoding. It is polyglot in nature, accessible and useable on a variety of environments ranging from mobile mobile devices, general purpose computers to data centres for distributed computing, and it is implemented in a variety of languages: Go, Java, Python, C/C++, Node.js, Ruby, PHP. See [gRPC homepage](https://grpc.io/).
-
-OpenCensus is a modern observability framework for distributed tracing and monitoring across microservices and monoliths alike. It is polyglot in nature, accessible and useable too on a variety of environments from mobile devices, general purpose computers and data centres for distributed computing and it is implemented in a plethora of languages: Go, Java, Python, C++, Node.js, Ruby, PHP, C# (coming soon).
-
-Go is a modern programming language that powers the cloud as well as modern systems programming, making it easy to build simple, reliable and efficient software. It is a cross platform, fast, statically typed and a simple language. See [golang.org](https://golang.org).
-
-With the above three introductory paragraphs, perhaps you already noticed the common themes: high performance, distributed computing, modern nature, cross platform, simplicity, reliability — those points make the three a match #compatibility, hence the motivation for this tutorial/article.
-
----
-
-For this tutorial, we have a company’s service that’s in charge of capitalizing letters sent in from various clients and internal microservices using gRPC.
-
-To use gRPC, we firstly need to create Protocol Buffer definitions and from those, use the Protocol Buffer compiler with the gRPC plugin to generate code stubs. If you need to take a look at the pre-requisites or a primer into gRPC, please check out the docs at [grpc.io/docs/](https://grpc.io/docs/).
-
-Our service takes in a payload with bytes, and then capitalizes them on the server.
-
-```
-syntax = "proto3";
-
-package rpc;
-
-message Payload {
- int32 id = 1;
- bytes data = 2;
-}
-
-service Fetch {
- rpc Capitalize(Payload) returns (Payload) {}
-}
-```
-
-{{< sc_center >}}Payload Message and Fetch service{{< /sc_center >}}
-
-To generate code, we’ll firstly put our definition in a file called “defs.proto” and move it into our “rpc” directory and then run this command to generate gRPC code stubs in Go, using this Makefile below:
-
-```
-protoc:
-protoc -I rpc rpc/defs.proto --go_out=plugins=grpc:rpc
-```
-
-{{< sc_center >}}Makefile{{< /sc_center >}}
-
-`make` should then generate code that’ll make the directory structure look like this
-
-```
-|-rpc/
- |-defs.proto
- |-defs.pb.go
-```
-After the code generation, we now need to add the business logic into the server
-
----
-
-### Plain Server
-
-Our server’s sole purpose is to capitalize content sent in and send it back to the client. With gRPC, as previously mentioned, the protoc plugin generated code for a server interface. This allows you create your own custom logic of operation, as we shall do below with a custom object that implements the `Capitalize`method.
-
-```
-package main
-
-import (
- "bytes"
- "context"
- "log"
- "net"
-
- "google.golang.org/grpc"
-
- "./rpc"
-)
-
-type fetchIt int
-
-// Compile time assertion that fetchIt implements FetchServer.
-var _ rpc.FetchServer = (*fetchIt)(nil)
-
-func (fi *fetchIt) Capitalize(ctx context.Context, in *rpc.Payload) (*rpc.Payload, error) {
- out := &rpc.Payload{
- Data: bytes.ToUpper(in.Data),
- }
- return out, nil
-}
-
-func main() {
- addr := ":9988"
- ln, err := net.Listen("tcp", addr)
- if err != nil {
- log.Fatalf("gRPC server: failed to listen: %v", err)
- }
- srv := grpc.NewServer()
- rpc.RegisterFetchServer(srv, new(fetchIt))
- log.Printf("fetchIt gRPC server serving at %q", addr)
- if err := srv.Serve(ln); err != nil {
- log.Fatalf("gRPC server: error serving: %v", err)
- }
-}
-```
-
-{{< sc_center >}}server.go{{< /sc_center >}}
-
-With that, we can now monetize access to generate money $$$. In order to accomplish that though, we need to create clients that speak gRPC and for that please see below:
-
----
-
-### Plain Client
-
-Our client makes a request to the gRPC server above, sending content that then gets capitalized and printed to our screen. It is interactive and can be run simply by `go run client.go`.
-
-```
-package main
-
-import (
- "bufio"
- "context"
- "fmt"
- "log"
- "os"
-
- "google.golang.org/grpc"
-
- "./rpc"
-)
-
-func main() {
- serverAddr := ":9988"
- cc, err := grpc.Dial(serverAddr, grpc.WithInsecure())
- if err != nil {
- log.Fatalf("fetchIt gRPC client failed to dial to server: %v", err)
- }
- fc := rpc.NewFetchClient(cc)
-
- fIn := bufio.NewReader(os.Stdin)
- for {
- fmt.Print("> ")
- line, _, err := fIn.ReadLine()
- if err != nil {
- log.Fatalf("Failed to read a line in: %v", err)
- }
-
- ctx := context.Background()
- out, err := fc.Capitalize(ctx, &rpc.Payload{Data: line})
- if err != nil {
- log.Printf("fetchIt gRPC client got error from server: %v", err)
- continue
- }
- fmt.Printf("< %s\n\n", out.Data)
- }
-}
-```
-
-which when run interactively, will look like this
-
-
-{{< sc_center >}}interactive response from the client{{< /sc_center >}}
-
-And now that we have a client, we are open for business!!
-
----
-
-### Aftermath
-
-It’s been 1 hour since launch. Tech blogs and other programmers are sharing news of our service all over their internet and social media; our service just got so popular and is being talked about all around the business world too, high fives are shared and congrats shared — after this celebration, we all go back home and call it a night. It’s the latest and greatest API in the world, it is off the charts, customers from all over the world come in, what could go wrong?
-
-It hits 3AM and our servers start getting over loaded. Response time degrades overall for everyone. This however is only noticed after one of the engineers tried to give a demo to their family that they restlessly awoke at 2:00AM due to excitement, but the service is taking 15ms to give back a response. In normal usage, we saw about at most 1ms response time. What is causing the sluggishness of the system? When did our service start getting slow? What is the solution? Throw more servers at it? How many servers should we throw at it? How do we know what is going wrong? When? How can the engineering and business teams figure out what to optimize or budget for? How can we tell we’ve successfully optimized the system and removed bottlenecks?
-
-In comes in OpenCensus: OpenCensus is a single distribution of libraries for distributed tracing and monitoring for modern and distributed systems. OpenCensus can help answer mostly all of those questions that we asked. By “mostly”, I mean that it can answer the observability related questions such as: When did the latency increase? Why? How did it increase? By how much? What part of the system is the slowest? How can we optimize and assert successful changes?
-
-OpenCensus is simple to integrate and use, it adds very low latency to your applications and it is already integrated into both gRPC and HTTP transports.
-
-OpenCensus allows you to trace and measure once and then export to a variety of backends like Prometheus, AWS X-Ray, Stackdriver Tracing and Monitoring, Jaeger, Zipkin etc. With that mentioned, let’s get started.
-
-### Part 1: observability by instrumenting the server
-
-To collect statistics from gRPC servers, OpenCensus is already integrated with gRPC out of the box, and one just has to import `go.opencensus.io/plugin/ocgrpc`. And then also subscribe to the gRPC server views. This amounts to a 7 line change
-
-```
-10a11,13
-> "go.opencensus.io/plugin/ocgrpc"
-> "go.opencensus.io/stats/view"
->
-32c35,38
-< srv := grpc.NewServer()
----
-> if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
-> log.Fatalf("Failed to register gRPC server views: %v", err)
-> }
-> srv := grpc.NewServer(grpc.StatsHandler(new(ocgrpc.ServerHandler)))
-```
-
-and then to trace the application, we’ll start a span on entering the function, then end it on exiting. This amounts to a 7 line change too
-
-```
-12a13
-> "go.opencensus.io/trace"
-22a24,29
-> _, span := trace.StartSpan(ctx, "(*fetchIt).Capitalize")
-> defer span.End()
->
-> span.Annotate([]trace.Attribute{
-> trace.Int64Attribute("len", int64(len(in.Data))),
-> }, "Data in")
-```
-
-In the tracing, notice the `trace.StartSpan(ctx, "(*fetchIt).Capitalize")`? We take a `context.Context`as the first argument, to use context propagation which carries over RPC specific information about a request to uniquely identify it.
-
-### How do we examine that “observability”?
-
-
-Now that we’ve got tracing and monitoring in, let’s export that data out. Earlier on, I made claims that with OpenCensus you collect and trace once, then export to a variety of backends, simulatenously. Well, it is time for me to walk that talk!
-
-To do that, we’ll need to use the exporter integrations in our app to send data to our favorite backends: AWS X-Ray, Prometheus, Stackdriver Tracing and Monitoring
-
-```
-7a8
-> "net/http"
-10a12,14
-> xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
-> "go.opencensus.io/exporter/prometheus"
-> "go.opencensus.io/exporter/stackdriver"
-12a17
-> "go.opencensus.io/trace"
-22a28,33
-> _, span := trace.StartSpan(ctx, "(*fetchIt).Capitalize")
-> defer span.End()
->
-> span.Annotate([]trace.Attribute{
-> trace.Int64Attribute("len", int64(len(in.Data))),
-> }, "Data in")
-40a52,56
->
-> // OpenCensus exporters
-> createAndRegisterExporters()
->
-> // Finally serve
-44a61,97
->
-> func createAndRegisterExporters() {
-> // For demo purposes, set this to always sample.
-> trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
-> // 1. Prometheus
-> prefix := "fetchit"
-> pe, err := prometheus.NewExporter(prometheus.Options{
-> Namespace: prefix,
-> })
-> if err != nil {
-> log.Fatalf("Failed to create Prometheus exporter: %v", err)
-> }
-> view.RegisterExporter(pe)
-> // We need to expose the Prometheus collector via an endpoint /metrics
-> go func() {
-> mux := http.NewServeMux()
-> mux.Handle("/metrics", pe)
-> log.Fatal(http.ListenAndServe(":9888", mux))
-> }()
->
-> // 2. AWS X-Ray
-> xe, err := xray.NewExporter(xray.WithVersion("latest"))
-> if err != nil {
-> log.Fatalf("Failed to create AWS X-Ray exporter: %v", err)
-> }
-> trace.RegisterExporter(xe)
->
-> // 3. Stackdriver Tracing and Monitoring
-> se, err := stackdriver.NewExporter(stackdriver.Options{
-> MetricPrefix: prefix,
-> })
-> if err != nil {
-> log.Fatalf("Failed to create Stackdriver exporter: %v", err)
-> }
-> view.RegisterExporter(se)
-> trace.RegisterExporter(se)
-> }
-```
-
-to finally give this code
-
-```
-package main
-
-import (
- "bytes"
- "context"
- "log"
- "net"
- "net/http"
-
- "google.golang.org/grpc"
-
- xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
- "go.opencensus.io/exporter/prometheus"
- "go.opencensus.io/exporter/stackdriver"
- "go.opencensus.io/plugin/ocgrpc"
- "go.opencensus.io/stats/view"
- "go.opencensus.io/trace"
-
- "./rpc"
-)
-
-type fetchIt int
-
-// Compile time assertion that fetchIt implements FetchServer.
-var _ rpc.FetchServer = (*fetchIt)(nil)
-
-func (fi *fetchIt) Capitalize(ctx context.Context, in *rpc.Payload) (*rpc.Payload, error) {
- _, span := trace.StartSpan(ctx, "(*fetchIt).Capitalize")
- defer span.End()
-
- span.Annotate([]trace.Attribute{
- trace.Int64Attribute("len", int64(len(in.Data))),
- }, "Data in")
- out := &rpc.Payload{
- Data: bytes.ToUpper(in.Data),
- }
- return out, nil
-}
-
-func main() {
- addr := ":9988"
- ln, err := net.Listen("tcp", addr)
- if err != nil {
- log.Fatalf("gRPC server: failed to listen: %v", err)
- }
- if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
- log.Fatalf("Failed to register gRPC server views: %v", err)
- }
- srv := grpc.NewServer(grpc.StatsHandler(new(ocgrpc.ServerHandler)))
- rpc.RegisterFetchServer(srv, new(fetchIt))
- log.Printf("fetchIt gRPC server serving at %q", addr)
-
- // OpenCensus exporters
- createAndRegisterExporters()
-
- // Finally serve
- if err := srv.Serve(ln); err != nil {
- log.Fatalf("gRPC server: error serving: %v", err)
- }
-}
-
-func createAndRegisterExporters() {
- // For demo purposes, set this to always sample.
- trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
- // 1. Prometheus
- prefix := "fetchit"
- pe, err := prometheus.NewExporter(prometheus.Options{
- Namespace: prefix,
- })
- if err != nil {
- log.Fatalf("Failed to create Prometheus exporter: %v", err)
- }
- view.RegisterExporter(pe)
- // We need to expose the Prometheus collector via an endpoint /metrics
- go func() {
- mux := http.NewServeMux()
- mux.Handle("/metrics", pe)
- log.Fatal(http.ListenAndServe(":9888", mux))
- }()
-
- // 2. AWS X-Ray
- xe, err := xray.NewExporter(xray.WithVersion("latest"))
- if err != nil {
- log.Fatalf("Failed to create AWS X-Ray exporter: %v", err)
- }
- trace.RegisterExporter(xe)
-
- // 3. Stackdriver Tracing and Monitoring
- se, err := stackdriver.NewExporter(stackdriver.Options{
- MetricPrefix: prefix,
- })
- if err != nil {
- log.Fatalf("Failed to create Stackdriver exporter: %v", err)
- }
- view.RegisterExporter(se)
- trace.RegisterExporter(se)
-}
-```
-
-{{< sc_center >}}OpenCensus instrumented server.go{{< /sc_center >}}
-
-
-and with the following variables set in our environment
-
-`AWS_REGION=region`
-
-`AWS_ACCESS_KEY_ID=keyID`
-
-`AWS_SECRET_ACCESS_KEY=key`
-
-`GOOGLE_APPLICATION_CREDENTIALS=credentials.json`
-
-as well as our prometheus.yml file
-
-```
-global:
- scrape_interval: 10s
-
- external_labels:
- monitor: 'media_search'
-
-scrape_configs:
- - job_name: 'media_search'
-
- scrape_interval: 10s
-
- static_configs:
- - targets: ['localhost:9888', 'localhost:9988', 'localhost:9989']
-```
-
-`prometheus --config.file=prometheus.yml`
-
-`go run server.go`
-
-`2018/05/12 11:40:17 fetchIt gRPC server serving at ":9988"`
-
-### Monitoring results
-
-
-{{< sc_center >}}Prometheus latency bucket examinations{{< /sc_center >}}
-
-
-{{< sc_center >}}Prometheus completed_rpcs examination{{< /sc_center >}}
-
-
-{{< sc_center >}}Prometheus sent_bytes_per_rpc_bucket examination{{< /sc_center >}}
-
-
-{{< sc_center >}}Stackdriver Monitoring completed_rpcs examination{{< /sc_center >}}
-
-
-{{< sc_center >}}Stackdriver Monitoring server_latency examination{{< /sc_center >}}
-
-### Tracing results
-
-
-{{< sc_center >}}Common case: low latency on the server{{< /sc_center >}}
-
-
-{{< sc_center >}}Postulation: pathological case of inbound network congestion{{< /sc_center >}}
-
-
-{{< sc_center >}}Postulation: pathological case of outbound network congestion{{< /sc_center >}}
-
-
-{{< sc_center >}}Stackdriver Trace — common case, fast response, low latency{{< /sc_center >}}
-
-
-{{< sc_center >}}Postulation: system overload on server hence long time for bytes.ToUpper to return{{< /sc_center >}}
-
-
-{{< sc_center >}}Postulation: outbound network congestion{{< /sc_center >}}
-
-
-{{< sc_center >}}Postulation: inbound network congestion{{< /sc_center >}}
-
----
-
-### Part 2: observability by instrumenting the client
-
-
-And then for client monitoring, we’ll just do the same thing for gRPC stats handler except using the ClientHandler and then also start and stop a trace and that’s it, collectively giving this diff below
-
-```
-7a8
-> "net/http"
-11a13,19
-> xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
-> "go.opencensus.io/exporter/prometheus"
-> "go.opencensus.io/exporter/stackdriver"
-> "go.opencensus.io/plugin/ocgrpc"
-> "go.opencensus.io/stats/view"
-> "go.opencensus.io/trace"
->
-17c25
-< cc, err := grpc.Dial(serverAddr, grpc.WithInsecure())
----
-> cc, err := grpc.Dial(serverAddr, grpc.WithInsecure(), grpc.WithStatsHandler(new(ocgrpc.ClientHandler)))
-22a31,38
-> // OpenCensus exporters for the client since disjoint
-> // and your customers will usually want to have their
-> // own statistics too.
-> createAndRegisterExporters()
-> if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
-> log.Fatalf("Failed to register gRPC client views: %v", err)
-> }
->
-31c47
-< ctx := context.Background()
----
-> ctx, span := trace.StartSpan(context.Background(), "Client.Capitalize")
-32a49
-> span.End()
-39c56,93
-< }
-\ No newline at end of file
----
-> }
->
-> func createAndRegisterExporters() {
-> // For demo purposes, set this to always sample.
-> trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
-> // 1. Prometheus
-> prefix := "fetchit"
-> pe, err := prometheus.NewExporter(prometheus.Options{
-> Namespace: prefix,
-> })
-> if err != nil {
-> log.Fatalf("Failed to create Prometheus exporter: %v", err)
-> }
-> view.RegisterExporter(pe)
-> // We need to expose the Prometheus collector via an endpoint /metrics
-> go func() {
-> mux := http.NewServeMux()
-> mux.Handle("/metrics", pe)
-> log.Fatal(http.ListenAndServe(":9889", mux))
-> }()
->
-> // 2. AWS X-Ray
-> xe, err := xray.NewExporter(xray.WithVersion("latest"))
-> if err != nil {
-> log.Fatalf("Failed to create AWS X-Ray exporter: %v", err)
-> }
-> trace.RegisterExporter(xe)
->
-> // 3. Stackdriver Tracing and Monitoring
-> se, err := stackdriver.NewExporter(stackdriver.Options{
-> MetricPrefix: prefix,
-> })
-> if err != nil {
-> log.Fatalf("Failed to create Stackdriver exporter: %v", err)
-> }
-> view.RegisterExporter(se)
-> trace.RegisterExporter(se)
-> }
-```
-
-or this which now becomes this code
-
-```
-package main
-
-import (
- "bufio"
- "context"
- "fmt"
- "log"
- "net/http"
- "os"
-
- "google.golang.org/grpc"
-
- xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
- "go.opencensus.io/exporter/prometheus"
- "go.opencensus.io/exporter/stackdriver"
- "go.opencensus.io/plugin/ocgrpc"
- "go.opencensus.io/stats/view"
- "go.opencensus.io/trace"
-
- "./rpc"
-)
-
-func main() {
- serverAddr := ":9988"
- cc, err := grpc.Dial(serverAddr, grpc.WithInsecure(), grpc.WithStatsHandler(new(ocgrpc.ClientHandler)))
- if err != nil {
- log.Fatalf("fetchIt gRPC client failed to dial to server: %v", err)
- }
- fc := rpc.NewFetchClient(cc)
-
- // OpenCensus exporters for the client since disjoint
- // and your customers will usually want to have their
- // own statistics too.
- createAndRegisterExporters()
- if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
- log.Fatalf("Failed to register gRPC client views: %v", err)
- }
-
- fIn := bufio.NewReader(os.Stdin)
- for {
- fmt.Print("> ")
- line, _, err := fIn.ReadLine()
- if err != nil {
- log.Fatalf("Failed to read a line in: %v", err)
- }
-
- ctx, span := trace.StartSpan(context.Background(), "Client.Capitalize")
- out, err := fc.Capitalize(ctx, &rpc.Payload{Data: line})
- span.End()
- if err != nil {
- log.Printf("fetchIt gRPC client got error from server: %v", err)
- continue
- }
- fmt.Printf("< %s\n\n", out.Data)
- }
-}
-
-func createAndRegisterExporters() {
- // For demo purposes, set this to always sample.
- trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
- // 1. Prometheus
- prefix := "fetchit"
- pe, err := prometheus.NewExporter(prometheus.Options{
- Namespace: prefix,
- })
- if err != nil {
- log.Fatalf("Failed to create Prometheus exporter: %v", err)
- }
- view.RegisterExporter(pe)
- // We need to expose the Prometheus collector via an endpoint /metrics
- go func() {
- mux := http.NewServeMux()
- mux.Handle("/metrics", pe)
- log.Fatal(http.ListenAndServe(":9889", mux))
- }()
-
- // 2. AWS X-Ray
- xe, err := xray.NewExporter(xray.WithVersion("latest"))
- if err != nil {
- log.Fatalf("Failed to create AWS X-Ray exporter: %v", err)
- }
- trace.RegisterExporter(xe)
-
- // 3. Stackdriver Tracing and Monitoring
- se, err := stackdriver.NewExporter(stackdriver.Options{
- MetricPrefix: prefix,
- })
- if err != nil {
- log.Fatalf("Failed to create Stackdriver exporter: %v", err)
- }
- view.RegisterExporter(se)
- trace.RegisterExporter(se)
-}
-```
-
-which gives this visualization
-
-
-
-
-
-
-
-
-
-
-
-Engineers can add alerts with Prometheus [https://prometheus.io/docs/alerting/overview/](https://prometheus.io/docs/alerting/overview/) or Stackdriver Monitoring [https://cloud.google.com/monitoring/alerts/](https://cloud.google.com/monitoring/alerts/) but also the various teams can examine system behaviour simultaneously, be it traces or metrics on a variety of backends. A question one might have is: “how about observability for streaming?” — for streaming you can use the same logic, but since in order to export a trace, the span needs to have been ended. However, with streaming, you have a single persistent connection that’s perhaps infinitely open. What you can do is register unique identifying information from a streaming request and then per stream response, start and end a span!
-
-
-With that we are off to the races!
-
-Thank you for reading this far and hope this tutorial was useful, you can find all the code in this tutorial at [https://github.com/orijtech/opencensus-for-grpc-go-developers](https://github.com/orijtech/opencensus-for-grpc-go-developers).
-
-Please feel free to check out the OpenCensus community https://opencensus.io send us feedback, instrument your backends and share with your friends and teams!
-
-This tutorial is part of a bunch more coming where we’ll use different languages, different transports and provide more samples etc.
-
-Emmanuel T Odeke
diff --git a/content/index.md b/content/index.md
deleted file mode 100644
index 8a7bf83c..00000000
--- a/content/index.md
+++ /dev/null
@@ -1,125 +0,0 @@
-+++
-Description = "index"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "index"
-title = "OpenCensus"
-date = "2018-05-22T11:02:24-05:00"
-+++
-
-{{% sc_index1 %}}
-
- Kudos to whoever came up with the design and architecture of OpenCensus. Tried it for the first time yesterday and it was a great experience for me as a monitoring developer. It feels a lot more thought out and robust than other solutions.
- ***- Ben Ripkens, Staff Software Engineer - Instana***
-
-
-
-{{% sc_index2 %}}
-#### What is OpenCensus?
-A single distribution of libraries that automatically collects traces and metrics from your app, displays them locally, and sends them to any analysis tool. See an [{{% sc_gloss1 %}}overview{{% /sc_gloss1 %}}](/overview).
-
-
-{{% /sc_index2 %}}
-
-{{% sc_index2 %}}
-#### Who is behind OpenCensus?
-OpenCensus originates from Google, where a set of libraries called Census were used to automatically capture traces and metrics from services. Since going open source, the project is now composed of a group of cloud providers, application performance management vendors, and open source contributors. The project is hosted in GitHub and all work occurs there.
-{{% /sc_index2 %}}
-
-{{% sc_index3 %}}
-#### How do I contribute?
-Contributions are highly appreciated! Please see our [{{% sc_gloss1 %}}community{{% /sc_gloss1 %}}](community) page to contribute.
-{{% /sc_index3 %}}
-{{% /sc_index1 %}}
-
-{{% sc_index4 %}}
-{{% sc_index5 %}}
-#### Key Features
-{{% /sc_index5 %}}
-
-{{% sc_index6 %}}
-
-### Wire protocols and consistent APIs
-Standard wire protocols and consistent APIs for handling trace and metric data.
-{{% /sc_index6 %}}
-
-{{% sc_index7 %}}
-
-### Single set of Libraries
-A single set of libraries for many languages, including Java, C++, Go. In progress - Python, PHP, Erlang, and Ruby.
-{{% /sc_index7 %}}
-
-{{% sc_index6 %}}
-
-### Integrations
-Included integrations with web and RPC frameworks, democratizing good tracing and metric collection.
-{{% /sc_index6 %}}
-
-{{% sc_index7 %}}
-
-### Exporters
-Included exporters for storage and analysis tools. Right now the list includes Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx.
-{{% /sc_index7 %}}
-
-{{% sc_index6 %}}
-
-### Open Source
-All the code is entirely open source, to easily add your own integrations and exporters.
-{{% /sc_index6 %}}
-
-{{% sc_index7 %}}
-
-### No Add-ons Needed
-No additional server or daemon is required to support OpenCensus.
-
-
-{{% /sc_index7 %}}
-
-{{% sc_index6 %}}
-
-### Optional Agent
-In process debugging: an optional agent for displaying request and metrics data on instrumented hosts.
-{{% /sc_index6 %}}
-
-{{% sc_index7 %}}
-
-
-**Get Involved** - Interested in developing for OpenCensus? Here are some ideas: [{{< sc_red >}}2018 Google Summer of Code{{< /sc_red >}}](https://storage.googleapis.com/summer-of-code/OpenCensusIdeasList.pdf)
-{{% /sc_index7 %}}
-{{% /sc_index4 %}}
-
-{{% sc_index8 %}}
-{{% sc_index9 %}}
-#### languages
-Go straight to the language of your choice:
-{{% /sc_index9 %}}
-
-
-
-
-
-
----
-
-{{% sc_index9 %}}
-#### Roadmap
-See the [{{% sc_gloss1 %}}current roadmap{{% /sc_gloss1 %}}](/roadmap) of OpenCensus.
-
-
----
-
-
-#### Partners & Contributors
-{{% /sc_index9 %}}
-{{% /sc_index8 %}}
diff --git a/content/integrations/_index.md b/content/integrations/_index.md
new file mode 100644
index 00000000..d4d7be67
--- /dev/null
+++ b/content/integrations/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Integrations"
+date: 2018-07-16T14:34:38-07:00
+draft: false
+weight: 70
+---
+
+{{% children %}}
diff --git a/content/integrations/caddy.md b/content/integrations/caddy.md
new file mode 100644
index 00000000..cac6b8f1
--- /dev/null
+++ b/content/integrations/caddy.md
@@ -0,0 +1,121 @@
+---
+title: "Caddy"
+date: 2018-07-16T14:42:17-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+#### Table of contents
+
+- [Background](#background)
+- [Enabling Observability](#enabling-observability)
+ - [Git checkout](#git-checkout)
+ - [Syntax](#syntax)
+ - [Variables Table](#variables-table)
+ - [Examples](#examples)
+
+#### Background
+
+Caddy is a modern web server commonly deployed in front of modern web services.
+As the complexity and deployment styles of web services grow,
+examining the behavior of the entire system quickly becomes difficult,
+and the gains from any updates, optimizations or load shedding become
+very hard to quantify or examine.
+
+To fill this void, Caddy deserves modern observability.
+
+Modern observability on the web server means that any operation performed after
+a request hits the web server can be observed, measured and examined,
+irrespective of the request's destination, the kind of serving the server
+performs, or the big-picture behavior of the system.
+
+Distributed Tracing and Monitoring is the mechanism by which we can gain
+insights into the behavior of a distributed system.
+
+Tracing gives us timing and causality information about the progression
+of a request. Using context propagation on the transport, e.g. HTTP, we can
+send information between remote services and, after completion, examine
+the propagated spans on various backends, without any vendor lock-in or any
+single-cloud lock-in.
+
+With monitoring/metric collection, we can collect any quantifiable metrics such as:
+* Client and Server latency
+* Memory statistics
+* Runtime behavior
+
+An added advantage of vendor-agnostic distributed tracing is that we can export
+these traces and metrics simultaneously to a plethora of backends for your
+site reliability engineers and other developers to examine, such as:
+* Instana
+* Prometheus/Grafana
+* AWS X-Ray
+* Zipkin
+* Jaeger
+* DataDog
+* Stackdriver Monitoring and Tracing
+and many more.
+
+However, the distributed tracing and monitoring framework should add very low
+latency overhead and remain optional, so that users of the server do not incur
+an expensive cost. Adding observability to the web server should not create a
+maintenance burden for your teams or for the project maintainers, nor should it
+require sophisticated and specialized distributed-systems and observability
+knowledge, nor sophisticated infrastructure deployments.
+
+#### Enabling Observability
+
+```shell
+caddy -observability "SamplerRate;exporter1[:config1Key=config1Value[:config2Key=config2Value...]][,exporter2...]"
+```
+
+##### Git checkout
+
+You can enable observability by checking out the instrumented branch of caddy
+```shell
+go get github.com/mholt/caddy/caddy
+cd $GOPATH/src/github.com/mholt/caddy
+git remote add orijtech git@github.com:orijtech/caddy.git && git fetch orijtech && git checkout instrument-with-opencensus && go get ./caddy
+```
+
+##### Syntax
+
+```
+observability := SamplerRate;Exporters
+SamplerRate := float64 value
+Exporters := [ExporterConfig](,ExporterConfig)?
+ExporterConfig := Name:[Key-ValuePairs]*
+Name := collection of symbolic names for exporters
+Key-Value Pair := KeyToken=ValueToken
+KeyToken := string
+ValueToken := string
+```
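To make the flag grammar above concrete, here is a small Go sketch of how such a value decomposes. This is purely illustrative — `parseObservability` and `ExporterConfig` are hypothetical names, not part of Caddy:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ExporterConfig is one "Name:key=value:key=value" entry. Illustrative only.
type ExporterConfig struct {
	Name    string
	Options map[string]string
}

// parseObservability splits "SamplerRate;Exporters" per the grammar above.
func parseObservability(s string) (float64, []ExporterConfig, error) {
	rateStr, rest := s, ""
	if i := strings.Index(s, ";"); i >= 0 {
		rateStr, rest = s[:i], s[i+1:]
	}
	rate, err := strconv.ParseFloat(rateStr, 64)
	if err != nil {
		return 0, nil, err
	}
	var exporters []ExporterConfig
	for _, entry := range strings.Split(rest, ",") {
		parts := strings.Split(entry, ":")
		cfg := ExporterConfig{Name: parts[0], Options: map[string]string{}}
		lastKey := ""
		for _, kv := range parts[1:] {
			if j := strings.Index(kv, "="); j >= 0 {
				lastKey = kv[:j]
				cfg.Options[lastKey] = kv[j+1:]
			} else if lastKey != "" {
				// A segment without "=" is the tail of the previous
				// value, e.g. the port in "agent=localhost:6831".
				cfg.Options[lastKey] += ":" + kv
			}
		}
		exporters = append(exporters, cfg)
	}
	return rate, exporters, nil
}

func main() {
	rate, exporters, _ := parseObservability("0.9;zipkin,prometheus:port=8999")
	fmt.Println(rate, exporters)
}
```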
+
+##### Variables Table
+
+Exporter Name|Key|Type|Notes|Example
+---|---|---|---|---
+aws-xray|AWS_REGION|String|The region that your project is located in|`AWS_REGION=us-west-2`
+aws-xray|AWS_ACCESS_KEY_ID|String|Your access key ID|`AWS_ACCESS_KEY_ID=keyID`
+aws-xray|AWS_SECRET_ACCESS_KEY|String|Your secret access key|`AWS_SECRET_ACCESS_KEY=secretKey`
+jaeger|agent|URL|The address of the Jaeger agent|`caddy -observability "jaeger:agent=localhost:6831"`
+jaeger|collector|URL|The URL of the Jaeger collector|`caddy -observability "jaeger:collector=http://localhost:9411"`
+jaeger|service-name|String|The service name when inspected by Jaeger|`caddy -observability "jaeger:service-name=search_endpoint"`
+prometheus|port|Integer|The port that will be scraped per your `prometheus.yml` file|`caddy -observability "prometheus:port=9987"`
+stackdriver|GOOGLE_APPLICATION_CREDENTIALS|File path|The credentials for your Google Cloud Platform project|`GOOGLE_APPLICATION_CREDENTIALS=~/creds.json caddy -observability "1;stackdriver:tracing=true"`
+stackdriver|monitoring|Boolean|A commandline option to toggle monitoring|`caddy -observability "stackdriver:monitoring=true"`
+stackdriver|tracing|Boolean|A commandline option to toggle tracing|`caddy -observability "stackdriver:tracing=true"`
+stackdriver|project-id|String|The Google Cloud Platform project ID|`caddy -observability "stackdriver:project-id=census-demos"`
+zipkin|local|URL|The URL of the local endpoint|`caddy -observability "zipkin:local=192.168.1.5:5454"`
+zipkin|reporter|URL|The URL of the reporter endpoint|`caddy -observability "zipkin:reporter=http://localhost:9411/api/v2/spans"`
+zipkin|service-name|String|The name of your service|`caddy -observability "zipkin:service-name=server"`
+
+##### Examples
+
+* A comprehensive example, run with all the environment variables set:
+
+```shell
+GOOGLE_APPLICATION_CREDENTIALS=./creds.json AWS_REGION=us-west-2 AWS_ACCESS_KEY_ID=keyId AWS_SECRET_ACCESS_KEY=secretKey \
+ caddy -observability "0.9;zipkin,prometheus:port=8999,aws-xray,stackdriver:tracing=true:monitoring=true:project-id=census-demos,jaeger:agent=localhost:6831,service-name=search-endpoint"
+```
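The leading `0.9` in the example above is the `SamplerRate`. A probability sampler of this kind can be sketched in plain Go — this illustrates the concept only, not Caddy's or OpenCensus' actual sampler:

```go
package main

import (
	"fmt"
	"math/rand"
)

// probabilitySampler returns a function that reports true for roughly
// rate*100 percent of calls. Illustrative only.
func probabilitySampler(rate float64) func() bool {
	return func() bool { return rand.Float64() < rate }
}

func main() {
	sample := probabilitySampler(0.9)
	sampled := 0
	for i := 0; i < 100000; i++ {
		if sample() {
			sampled++
		}
	}
	fmt.Printf("sampled %d of 100000 requests\n", sampled)
}
```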
diff --git a/content/integrations/google_cloud_spanner/Go.md b/content/integrations/google_cloud_spanner/Go.md
new file mode 100644
index 00000000..48d797f5
--- /dev/null
+++ b/content/integrations/google_cloud_spanner/Go.md
@@ -0,0 +1,192 @@
+---
+title: "Go"
+date: 2018-07-24T15:14:00-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of a couple of APIs
+
+API|Guided codelab
+---|---
+Spanner|[Spanner codelab](/codelabs/spanner)
+Stackdriver |[Stackdriver codelab](/codelabs/stackdriver)
+{{% /notice %}}
+
+Cloud Spanner's Go package was already instrumented for:
+* Tracing with OpenCensus
+* Metrics with gRPC
+
+## Table of contents
+- [Packages to import](#packages-to-import)
+- [Enable metric reporting](#enable-metric-reporting)
+  - [Register client metric views](#register-client-metric-views)
+  - [Register server metric views](#register-server-metric-views)
+- [Exporting traces and metrics](#exporting-traces-and-metrics)
+- [End to end code sample](#end-to-end-code-sample)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+#### Packages to import
+
+For tracing and metrics on Spanner, we'll import a couple of packages
+
+Package Name|Package link
+---|---
+The Cloud Spanner Go package|[cloud.google.com/go/spanner](https://godoc.org/cloud.google.com/go/spanner)
+The OpenCensus trace package|[go.opencensus.io/trace](https://godoc.org/go.opencensus.io/trace)
+The OpenCensus metrics views package|[go.opencensus.io/stats/view](https://godoc.org/go.opencensus.io/stats/view)
+The OpenCensus gRPC plugin|[go.opencensus.io/plugin/ocgrpc](https://godoc.org/go.opencensus.io/plugin/ocgrpc)
+
+And when imported in code
+{{<highlight go>}}
+import (
+    "cloud.google.com/go/spanner"
+    "go.opencensus.io/plugin/ocgrpc"
+    "go.opencensus.io/stats"
+    "go.opencensus.io/stats/view"
+    "go.opencensus.io/trace"
+)
+{{</highlight>}}
+
+#### Enable metric reporting
+
+To enable metric reporting/exporting, we need to enable a metrics exporter, but before that we'll need
+to register and enable the views that match the metrics to collect. For a complete list of the available views,
+please visit [https://godoc.org/go.opencensus.io/plugin/ocgrpc](https://godoc.org/go.opencensus.io/plugin/ocgrpc).
+
+However, for now we'll split them into client and server views
+
+##### Register client metric views
+{{<highlight go>}}
+if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
+    log.Fatalf("Failed to register gRPC client views: %v", err)
+}
+{{</highlight>}}
+
+##### Register server metric views
+{{<highlight go>}}
+if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
+    log.Fatalf("Failed to register gRPC server views: %v", err)
+}
+{{</highlight>}}
+
+##### Exporting traces and metrics
+The last step is to enable trace and metric exporting. For that we'll use, say, the [Stackdriver Exporter](/supported-exporters/go/stackdriver) or
+any of the other [Go exporters](/supported-exporters/go/).
+
+##### End to end code sample
+With all the steps combined, we'll finally have this code snippet
+{{<highlight go>}}
+package main
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "cloud.google.com/go/spanner"
+ "golang.org/x/net/context"
+
+ "go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/plugin/ocgrpc"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/trace"
+)
+
+type Player struct {
+ FirstName string `spanner:"first_name"`
+ LastName string `spanner:"last_name"`
+ UUID string `spanner:"uuid"`
+ Email string `spanner:"email"`
+}
+
+func main() {
+ se, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "census-demos",
+ MetricPrefix: "spanner-oc-demo",
+ })
+ if err != nil {
+ log.Fatalf("StatsExporter err: %v", err)
+ }
+ // Let's ensure that the Stackdriver exporter uploads all its data before the program exits
+ defer se.Flush()
+
+ // Enable tracing
+ trace.RegisterExporter(se)
+
+ // Enable metrics collection
+ view.RegisterExporter(se)
+
+ // Register all the gRPC client views
+	if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
+		log.Fatalf("Failed to register gRPC default client views for metrics: %v", err)
+	}
+	// Register all the gRPC server views
+	if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
+		log.Fatalf("Failed to register gRPC default server views for metrics: %v", err)
+	}
+
+ // Enable the trace sampler.
+	// We are always sampling for demo purposes only: this is a very high
+	// sampling rate, but sufficient for the purpose of this quick demo.
+	// More realistically, tracing 1 in 10,000 requests might be more useful.
+ trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
+
+ ctx := context.Background()
+
+ // The database must exist
+ databaseName := "projects/census-demos/instances/census-demos/databases/demo1"
+ sessionPoolConfig := spanner.SessionPoolConfig{MinOpened: 5, WriteSessions: 1}
+ client, err := spanner.NewClientWithConfig(ctx, databaseName, spanner.ClientConfig{SessionPoolConfig: sessionPoolConfig})
+ if err != nil {
+ log.Fatalf("SpannerClient err: %v", err)
+ }
+ defer client.Close()
+
+ // Warm up the spanner client session. In normal usage
+ // you'd have hit this point after the first operation.
+ _, _ = client.Single().ReadRow(ctx, "Players", spanner.Key{"foo@gmail.com"}, []string{"email"})
+
+ for i := 0; i < 3; i++ {
+ ctx, span := trace.StartSpan(ctx, "create-players")
+
+ players := []*Player{
+ {FirstName: "Poke", LastName: "Mon", Email: "poke.mon@example.org", UUID: "f1578551-eb4b-4ecd-aee2-9f97c37e164e"},
+ {FirstName: "Go", LastName: "Census", Email: "go.census@census.io", UUID: "540868a2-a1d8-456b-a995-b324e4e7957a"},
+ {FirstName: "Quick", LastName: "Sort", Email: "q.sort@gmail.com", UUID: "2b7e0098-a5cc-4f32-aabd-b978fc6b9710"},
+ }
+ up := fmt.Sprintf("%d-%d.", i, time.Now().Unix())
+ for _, player := range players {
+ player.Email = up + player.Email
+ }
+
+ if err := newPlayers(ctx, client, players...); err != nil {
+ log.Printf("Creating newPlayers err: %v", err)
+ }
+ span.End()
+ }
+}
+
+func newPlayers(ctx context.Context, client *spanner.Client, players ...*Player) error {
+ var ml []*spanner.Mutation
+ for _, player := range players {
+ m, err := spanner.InsertStruct("Players", player)
+ if err != nil {
+ return err
+ }
+ ml = append(ml, m)
+ }
+ _, err := client.Apply(ctx, ml)
+ return err
+}
+{{</highlight>}}
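The `spanner:"..."` struct tags in the sample above are how `spanner.InsertStruct` maps `Player` fields to the `Players` table columns. The tag lookup itself is plain Go reflection, sketched here as a simplified illustration (not the actual Spanner client implementation):

```go
package main

import (
	"fmt"
	"reflect"
)

// Player mirrors the struct used in the sample above.
type Player struct {
	FirstName string `spanner:"first_name"`
	LastName  string `spanner:"last_name"`
	UUID      string `spanner:"uuid"`
	Email     string `spanner:"email"`
}

// columnsOf returns the Spanner column name for each struct field,
// as declared by its `spanner:"..."` tag.
func columnsOf(v interface{}) []string {
	t := reflect.TypeOf(v)
	cols := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		cols = append(cols, t.Field(i).Tag.Get("spanner"))
	}
	return cols
}

func main() {
	// The column order follows the struct field order.
	fmt.Println(columnsOf(Player{}))
}
```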
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/integrations/google_cloud_spanner/Java.md b/content/integrations/google_cloud_spanner/Java.md
new file mode 100644
index 00000000..23ecd7fd
--- /dev/null
+++ b/content/integrations/google_cloud_spanner/Java.md
@@ -0,0 +1,300 @@
+---
+title: "Java"
+date: 2018-07-24T15:14:00-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of a couple of APIs
+
+API|Guided codelab
+---|---
+Spanner|[Spanner codelab](/codelabs/spanner)
+Stackdriver |[Stackdriver codelab](/codelabs/stackdriver)
+{{% /notice %}}
+
+Cloud Spanner's Java package was already instrumented for:
+* Tracing with OpenCensus
+* Metrics with gRPC
+
+## Table of contents
+- [Packages to import](#packages-to-import)
+- [Enable metric reporting](#enable-metric-reporting)
+  - [Register gRPC views](#register-grpc-views)
+- [Exporting traces and metrics](#exporting-traces-and-metrics)
+- [End to end code sample](#end-to-end-code-sample)
+- [Running it](#running-it)
+  - [Maven install](#maven-install)
+  - [Run the code](#run-the-code)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+#### Packages to import
+
+For tracing and metrics on Spanner, we'll import a couple of packages
+
+Package Name|Package link
+---|---
+The Cloud Spanner Java package|[com.google.cloud.spanner](https://googlecloudplatform.github.io/google-cloud-java)
+The OpenCensus trace package|[io.opencensus.trace](https://www.javadoc.io/doc/io.opencensus/opencensus-trace)
+The OpenCensus Java gRPC views|[io.opencensus.contrib.grpc.metrics.RpcViews](https://github.com/census-instrumentation/opencensus-java/tree/master/contrib/grpc_metrics)
+
+And when imported in code:
+
+```java
+import com.google.cloud.spanner.*;
+import io.opencensus.contrib.grpc.metrics.RpcViews;
+import io.opencensus.trace.Tracing;
+```
+
+#### Enable metric reporting
+
+To enable metric reporting/exporting, we need to enable a metrics exporter, but before that we'll need
+to register and enable the views that match the metrics to collect. For a complete list of the available views,
+please visit [io.opencensus.contrib.grpc.metrics.RpcViews](https://github.com/census-instrumentation/opencensus-java/tree/master/contrib/grpc_metrics).
+
+Finally, we'll register all the views
+
+##### Register gRPC views
+
+```java
+RpcViews.registerAllGrpcViews();
+```
+
+##### Exporting traces and metrics
+The last step is to enable trace and metric exporting. For that we'll use, say, the [Stackdriver Exporter](/supported-exporters/java/stackdriver) or
+any of the other [Java exporters](/supported-exporters/java/).
+
+##### End to end code sample
+With all the steps combined, we'll finally have this code snippet
+
+{{<highlight java>}}
+package com.opencensus.examples;
+
+import com.google.cloud.spanner.DatabaseClient;
+import com.google.cloud.spanner.DatabaseId;
+import com.google.cloud.spanner.Key;
+import com.google.cloud.spanner.Mutation;
+import com.google.cloud.spanner.ResultSet;
+import com.google.cloud.spanner.Spanner;
+import com.google.cloud.spanner.SpannerOptions;
+import com.google.cloud.spanner.Statement;
+
+import io.opencensus.common.Scope;
+import io.opencensus.contrib.grpc.metrics.RpcViews;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+import io.opencensus.trace.Tracing;
+import io.opencensus.trace.samplers.Samplers;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class SpannerOpenCensusTutorial {
+ private DatabaseClient dbClient;
+ private Spanner spanner;
+
+ private static String parentSpanName = "create-players";
+ public SpannerOpenCensusTutorial(String instanceId, String databaseId) throws Exception {
+ // Instantiate the client.
+ SpannerOptions options = SpannerOptions.getDefaultInstance();
+ this.spanner = options.getService();
+
+ // And then create the Spanner database client.
+ this.dbClient = this.spanner.getDatabaseClient(DatabaseId.of(
+ options.getProjectId(), instanceId, databaseId));
+
+ // Next up let's install the exporter for Stackdriver tracing.
+ StackdriverTraceExporter.createAndRegister();
+ Tracing.getExportComponent().getSampledSpanStore()
+ .registerSpanNamesForCollection(Arrays.asList(parentSpanName));
+
+ // Then the exporter for Stackdriver monitoring/metrics.
+ StackdriverStatsExporter.createAndRegister();
+ RpcViews.registerAllGrpcViews();
+ }
+
+ public void close() {
+ this.spanner.close();
+ }
+
+ public void warmUpRead() {
+ this.dbClient.singleUse().readRow("Players", Key.of("foo@gmail.com"), Arrays.asList("email"));
+ }
+
+ public static void main(String ...args) throws Exception {
+ if (args.length < 2) {
+            System.err.println("Usage: SpannerOpenCensusTutorial <instanceId> <databaseId>");
+ return;
+ }
+
+ try {
+ SpannerOpenCensusTutorial zdb = new SpannerOpenCensusTutorial(args[0], args[1]);
+ // Warm up the spanner client session. In normal usage
+ // you'd have hit this point after the first operation.
+ zdb.warmUpRead();
+
+ for (int i=0; i < 3; i++) {
+ String up = i + "-" + (System.currentTimeMillis() / 1000) + ".";
+ List<Mutation> mutations = Arrays.asList(
+ playerMutation("Poke", "Mon", up + "poke.mon@example.org", "f1578551-eb4b-4ecd-aee2-9f97c37e164e"),
+ playerMutation("Go", "Census", up + "go.census@census.io", "540868a2-a1d8-456b-a995-b324e4e7957a"),
+ playerMutation("Quick", "Sort", up + "q.sort@gmail.com", "2b7e0098-a5cc-4f32-aabd-b978fc6b9710")
+ );
+
+ zdb.insertPlayers(mutations);
+ }
+
+ zdb.close();
+ } catch (Exception e) {
+ System.out.println("Exception while adding player: " + e);
+ } finally {
+ System.out.println("Bye!");
+ }
+ }
+
+ public static Mutation playerMutation(String firstName, String lastName, String email, String uuid) {
+ return Mutation.newInsertBuilder("Players")
+ .set("first_name")
+ .to(firstName)
+ .set("last_name")
+ .to(lastName)
+ .set("uuid")
+ .to(uuid)
+ .set("email")
+ .to(email)
+ .build();
+ }
+
+ public void insertPlayers(List<Mutation> players) throws Exception {
+ try (Scope ss = Tracing.getTracer()
+ .spanBuilderWithExplicitParent(parentSpanName, null)
+ // Enable the trace sampler.
+ // We are always sampling for demo purposes only: this is a very high sampling
+ // rate, but sufficient for the purpose of this quick demo.
+ // More realistically perhaps tracing 1 in 10,000 might be more useful
+ .setSampler(Samplers.alwaysSample())
+ .startScopedSpan()) {
+
+ this.dbClient.write(players);
+ }
+ }
+}
+{{</highlight>}}
+
+{{<highlight xml>}}
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>com.opencensus.tutorials</groupId>
+    <artifactId>opencensus-tutorials</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>opencensus-examples</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <maven.compiler.source>1.8</maven.compiler.source>
+        <maven.compiler.target>1.8</maven.compiler.target>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.11.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>com.google.cloud</groupId>
+            <artifactId>google-cloud-spanner</artifactId>
+            <version>0.33.0-beta</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>com.google.guava</groupId>
+                    <artifactId>guava-jdk5</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>io.opencensus</groupId>
+                    <artifactId>opencensus-api</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>com.google.guava</groupId>
+            <artifactId>guava</artifactId>
+            <version>20.0</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-contrib-grpc-metrics</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+            <scope>runtime</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>exec-maven-plugin</artifactId>
+                <version>1.4.0</version>
+                <configuration>
+                    <mainClass>com.opencensus.examples.SpannerOpenCensusTutorial</mainClass>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
+{{</highlight>}}
+
+#### Running it
+##### Maven install
+```shell
+mvn install
+```
+
+##### Run the code
+```shell
+mvn exec:java -Dexec.mainClass=com.opencensus.examples.SpannerOpenCensusTutorial -Dexec.args="census-demos demo1"
+```
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/integrations/google_cloud_spanner/_index.md b/content/integrations/google_cloud_spanner/_index.md
new file mode 100644
index 00000000..7a5a5f40
--- /dev/null
+++ b/content/integrations/google_cloud_spanner/_index.md
@@ -0,0 +1,19 @@
+---
+title: "Google Cloud Spanner"
+date: 2018-07-24T14:20:00-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+Cloud Spanner is a fully managed, mission-critical, relational database service that offers transactional consistency at
+global scale, schemas, SQL (ANSI 2011 with extensions), and automatic, synchronous replication for high availability.
+
+OpenCensus has been used to instrument Cloud Spanner clients in the following languages:
+{{%children%}}
+
+Or you can read an [all-in one blog post/tutorial](https://medium.com/@orijtech/cloud-spanner-instrumented-by-opencensus-and-exported-to-stackdriver-6ed61ed6ab4e)
+
+For more information and to get started, see the [Spanner docs](https://cloud.google.com/spanner/docs).
diff --git a/content/integrations/google_cloud_storage/Go.md b/content/integrations/google_cloud_storage/Go.md
new file mode 100644
index 00000000..a355a447
--- /dev/null
+++ b/content/integrations/google_cloud_storage/Go.md
@@ -0,0 +1,183 @@
+---
+title: "Go"
+date: 2018-07-24T23:19:00-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of a couple of APIs
+
+API|Guided codelab
+---|---
+Storage |[Storage codelab](/codelabs/storage)
+Stackdriver |[Stackdriver codelab](/codelabs/stackdriver)
+{{% /notice %}}
+
+Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, managed by Google.
+For more information and to get started, see the [Storage docs](https://cloud.google.com/storage/docs).
+
+Cloud Storage's Go package was already instrumented for:
+* Tracing with OpenCensus
+
+## Table of contents
+- [Packages to import](#packages-to-import)
+- [Technical detour](#technical-detour)
+- [Enable metric reporting](#enable-metric-reporting)
+- [End to end code sample](#end-to-end-code-sample)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+#### Packages to import
+
+For tracing and metrics on Cloud Storage, we'll import a couple of packages
+
+Package Name|Package link
+---|---
+The Cloud Storage Go package|[cloud.google.com/go/storage](https://godoc.org/cloud.google.com/go/storage)
+The OpenCensus trace package|[go.opencensus.io/trace](https://godoc.org/go.opencensus.io/trace)
+The OpenCensus stats packages|[go.opencensus.io/stats](https://godoc.org/go.opencensus.io/stats)
+The OpenCensus HTTP plugin package|[go.opencensus.io/plugin/ochttp](https://godoc.org/go.opencensus.io/plugin/ochttp)
+
+And when imported in code
+{{<highlight go>}}
+import (
+    "cloud.google.com/go/storage"
+    "go.opencensus.io/plugin/ochttp"
+    "go.opencensus.io/stats/view"
+    "go.opencensus.io/trace"
+    "google.golang.org/api/option"
+)
+{{</highlight>}}
+
+#### Technical detour
+
+Because GCS uses HTTP to connect to Google's backend, we'll need to enable metrics and tracing using a custom HTTP client
+for the GCS operations. The custom client will have an `ochttp`-enabled transport, and then the rest is straightforward.
+
+##### Setting up the ochttp enabled transport
+
+{{<highlight go>}}
+import (
+    "context"
+    "log"
+    "net/http"
+
+    "cloud.google.com/go/storage"
+    "go.opencensus.io/plugin/ochttp"
+    "google.golang.org/api/option"
+)
+
+hc := &http.Client{Transport: new(ochttp.Transport)}
+gcsClient, err := storage.NewClient(context.Background(), option.WithHTTPClient(hc))
+if err != nil {
+    log.Fatalf("Failed to create GCS client: %v", err)
+}
+{{</highlight>}}
+
+#### Enable metric reporting
+
+To enable metric reporting/exporting, we need to enable a metrics exporter, but before that we'll need
+to register and enable the views that match the HTTP metrics to collect. For a complete list of the available views,
+please visit [https://godoc.org/go.opencensus.io/plugin/ochttp](https://godoc.org/go.opencensus.io/plugin/ochttp).
+
+However, for now we'll split them into client and server views
+
+##### Register client metric views
+{{<highlight go>}}
+if err := view.Register(ochttp.DefaultClientViews...); err != nil {
+    log.Fatalf("Failed to register HTTP client views: %v", err)
+}
+{{</highlight>}}
+
+##### Register server metric views
+{{<highlight go>}}
+if err := view.Register(ochttp.DefaultServerViews...); err != nil {
+    log.Fatalf("Failed to register HTTP server views: %v", err)
+}
+{{</highlight>}}
+
+##### Exporting traces and metrics
+The last step is to enable trace and metric exporting. For that we'll use, say, the [Stackdriver Exporter](/supported-exporters/go/stackdriver) or
+any of the other [Go exporters](/supported-exporters/go/).
+
+##### End to end code sample
+With all the steps combined, we'll finally have this code snippet
+{{<highlight go>}}
+package main
+
+import (
+ "context"
+ "io"
+ "log"
+ "os"
+
+ "cloud.google.com/go/storage"
+ "golang.org/x/oauth2/google"
+ "google.golang.org/api/option"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/plugin/ochttp"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // Create the Stackdriver exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "census-demos",
+ MetricPrefix: "gcs-oc",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create Stackdriver exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // And for the custom transport to enable metrics collection
+ ctx := context.Background()
+ dc, err := google.DefaultClient(ctx, storage.ScopeReadWrite)
+ if err != nil {
+ log.Fatalf("Failed to create the google OAuth2 client: %v", err)
+ }
+ // Enable ochttp.Transport on the base transport
+ dc.Transport = &ochttp.Transport{Base: dc.Transport}
+ gcsClient, err := storage.NewClient(ctx, option.WithHTTPClient(dc))
+ if err != nil {
+ log.Fatalf("Failed to create GCS client: %v", err)
+ }
+
+ if err := view.Register(ochttp.DefaultClientViews...); err != nil {
+ log.Fatalf("Failed to register HTTP client views: %v", err)
+ }
+
+ if err := view.Register(ochttp.DefaultServerViews...); err != nil {
+ log.Fatalf("Failed to register HTTP server views: %v", err)
+ }
+
+ // For the purposes of demo, we'll always sample
+ trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
+
+ bucket := gcsClient.Bucket("census-demos")
+ obj := bucket.Object("hello.txt")
+ // Write "Hello, world!" to the object
+ w := obj.NewWriter(ctx)
+ if _, err := w.Write([]byte("Hello, world!")); err != nil {
+ log.Fatalf("Failed to write to object: %v", err)
+ }
+ if err := w.Close(); err != nil {
+ log.Fatalf("Failed to close object handle: %v", err)
+ }
+
+ // Now read back the content
+ r, err := obj.NewReader(ctx)
+ if err != nil {
+ log.Fatalf("Failed to read from object: %v", err)
+ }
+ defer r.Close()
+
+ _, _ = io.Copy(os.Stdout, r)
+}
+{{</highlight>}}
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/integrations/google_cloud_storage/_index.md b/content/integrations/google_cloud_storage/_index.md
new file mode 100644
index 00000000..a9758ad0
--- /dev/null
+++ b/content/integrations/google_cloud_storage/_index.md
@@ -0,0 +1,17 @@
+---
+title: "Google Cloud Storage"
+date: 2018-07-24T14:28:00-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+
+Cloud Storage is a unified object storage for developers and enterprises, managed by Google.
+For more information and to get started, see the [Storage docs](https://cloud.google.com/storage/docs).
+
+OpenCensus is already integrated into Google Cloud Storage API clients in various languages:
+
+{{%children%}}
diff --git a/content/integrations/groupcache.md b/content/integrations/groupcache.md
new file mode 100644
index 00000000..4c09264f
--- /dev/null
+++ b/content/integrations/groupcache.md
@@ -0,0 +1,14 @@
+---
+title: "GroupCache"
+date: 2018-07-16T14:42:31-07:00
+draft: false
+class: "integration-page"
+---
+
+#### Resources
+
+* [Go driver](https://github.com/orijtech/groupcache)
+
+#### Tutorials
+
+* [Go](https://medium.com/@orijtech/groupcache-instrumented-by-opencensus-6a625c3724c)
diff --git a/content/integrations/memcached.md b/content/integrations/memcached.md
new file mode 100644
index 00000000..2d077982
--- /dev/null
+++ b/content/integrations/memcached.md
@@ -0,0 +1,17 @@
+---
+title: "Memcached"
+date: 2018-07-16T14:42:06-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+#### Resources
+
+* [Go Driver](https://github.com/orijtech/gomemcache)
+* [Python Driver](https://github.com/orijtech/pymemcache/pull/1)
+
+#### Tutorials
+
+* [Go & Python](https://medium.com/@orijtech/memcached-clients-instrumented-with-opencensus-in-go-and-python-dacbd01b269c)
diff --git a/content/integrations/mongodb.md b/content/integrations/mongodb.md
new file mode 100644
index 00000000..5fec8d4c
--- /dev/null
+++ b/content/integrations/mongodb.md
@@ -0,0 +1,17 @@
+---
+title: "MongoDB"
+date: 2018-07-16T14:42:10-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+#### Resources
+
+* [Go Driver](https://github.com/orijtech/mongo-go-driver)
+* [Java Driver](https://github.com/orijtech/mongo-java-driver/pull/1)
+
+#### Tutorials
+
+* [Go](https://medium.com/@orijtech/mongodb-driver-instrumented-with-opencensus-in-go-e691370b8184)
diff --git a/content/integrations/redis/Go.md b/content/integrations/redis/Go.md
new file mode 100644
index 00000000..4e1c0727
--- /dev/null
+++ b/content/integrations/redis/Go.md
@@ -0,0 +1,15 @@
+---
+title: "Go"
+date: 2018-07-24T07:09:03-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+Some Redis clients have already been instrumented with OpenCensus to provide traces and metrics.
+
+Packages|Godoc
+---|---
+gomodule/redigo|https://godoc.org/github.com/opencensus-integrations/redigo
+goredis/redis|https://godoc.org/github.com/opencensus-integrations/redis
diff --git a/content/integrations/redis/Java.md b/content/integrations/redis/Java.md
new file mode 100644
index 00000000..310b3046
--- /dev/null
+++ b/content/integrations/redis/Java.md
@@ -0,0 +1,254 @@
+---
+title: "Java"
+date: 2018-07-25T09:38:03-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Redis. If you don't yet have an installation of Redis, please [Click here to get started](https://redis.io/topics/quickstart)
+{{% /notice %}}
+
+{{% notice note %}}
+This guide makes use of Stackdriver. If you haven't yet, please [Click here to get started with Stackdriver](/codelabs/stackdriver)
+{{% /notice %}}
+
+
+Some Redis clients have already been instrumented with OpenCensus to provide traces and metrics.
+
+Packages|Repository link
+---|---
+jedis|https://github.com/opencensus-integrations/jedis
+
+## Table of contents
+- [Generating the JAR](#generating-the-jar)
+ - [Clone this repository](#clone-this-repository)
+ - [Generate and install](#generate-and-install)
+- [Enabling observability](#enabling-observability)
+- [Available metrics](#available-metrics)
+- [End to end example](#end-to-end-example)
+ - [Running it](#running-it)
+
+#### Generating the JAR
+
+##### Clone this repository
+```shell
+git clone https://github.com/opencensus-integrations/jedis
+```
+
+##### Generate and install
+Inside the cloned repository's directory, run:
+```shell
+mvn install:install-file -Dfile=$(pwd)/target/jedis-3.0.0-SNAPSHOT.jar \
+-DgroupId=redis.clients -DartifactId=jedis -Dversion=3.0.0 \
+-Dpackaging=jar -DgeneratePom=true
+```
+
+#### Enabling observability
+To enable observability, use Jedis as you normally would, but with one addition:
+
+{{}}
+import redis.clients.jedis.Observability;
+{{}}
+
+and then, to enable metrics:
+{{}}
+// Enable exporting of all the Jedis specific metrics and views
+Observability.registerAllViews();
+{{}}
+
+#### Available metrics
+Metric search suffix|Description
+---|---
+redis/bytes_read|The number of bytes read from the Redis server
+redis/bytes_written|The number of bytes written out to the Redis server
+redis/dials|The number of connection dials made to the Redis server
+redis/dial_latency_milliseconds|The number of milliseconds spent dialing connections to the Redis server
+redis/errors|The number of errors encountered
+redis/connections_opened|The number of new connections
+redis/roundtrip_latency|The latency spent for various Redis operations
+redis/reads|The number of reads performed
+redis/writes|The number of writes performed
+
+#### End to end example
+{{}}
+
+{{}}
+package io.opencensus.tutorials.jedis;
+
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+import io.opencensus.trace.Tracing;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.config.TraceParams;
+import io.opencensus.trace.samplers.Samplers;
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.io.IOException;
+
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.Observability;
+
+public class JedisOpenCensus {
+ private static final Jedis jedis = new Jedis("localhost");
+
+ public static void main(String ...args) {
+ // Enable exporting of all the Jedis specific metrics and views.
+ Observability.registerAllViews();
+
+ // Now enable OpenCensus exporters
+ setupOpenCensusExporters();
+
+ // Now for the repl
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ System.out.print("> ");
+ System.out.flush();
+ String query = stdin.readLine();
+
+ // Check Redis if we've got a hit firstly
+ String result = jedis.get(query);
+ if (result == null || result.isEmpty()) {
+ // Cache miss so process it and memoize it
+ result = "$" + query + "$";
+ jedis.set(query, result);
+ }
+ System.out.println("< " + result + "\n");
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static void setupOpenCensusExporters() {
+ String gcpProjectId = "census-demos";
+
+ try {
+ StackdriverTraceExporter.createAndRegister(
+ StackdriverTraceConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+
+ StackdriverStatsExporter.createAndRegister(
+ StackdriverStatsConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+ } catch (Exception e) {
+ System.err.println("Failed to setup OpenCensus " + e);
+ }
+
+ // Change the sampling rate to always sample
+ TraceConfig traceConfig = Tracing.getTraceConfig();
+ traceConfig.updateActiveTraceParams(
+ traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+ }
+}
+{{}}
+
+{{}}
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.ocgrpc</groupId>
+    <artifactId>ocgrpc</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>ocgrpc</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.15.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+            <scope>runtime</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>redis.clients</groupId>
+            <artifactId>jedis</artifactId>
+            <version>3.0.0</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.7.0</version>
+                <configuration>
+                    <source>1.8</source>
+                    <target>1.8</target>
+                </configuration>
+            </plugin>
+
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>appassembler-maven-plugin</artifactId>
+                <version>1.10</version>
+                <configuration>
+                    <programs>
+                        <program>
+                            <id>JedisOpenCensus</id>
+                            <mainClass>io.opencensus.tutorials.jedis.JedisOpenCensus</mainClass>
+                        </program>
+                    </programs>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
+{{}}
+{{}}
+
+##### Running it
+```shell
+mvn install && mvn exec:java -Dexec.mainClass=io.opencensus.tutorials.jedis.JedisOpenCensus
+```
+
+#### Examining your metrics and traces
diff --git a/content/integrations/redis/_index.md b/content/integrations/redis/_index.md
new file mode 100644
index 00000000..fc5a0655
--- /dev/null
+++ b/content/integrations/redis/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Redis"
+date: 2018-07-16T14:42:03-07:00
+draft: false
+class: "integration-page"
+---
+
+
+
+Some Redis clients have been instrumented with OpenCensus to provide metrics and traces.
+We have integrations in the following languages:
+{{% children %}}
+
+Please see the [blog post announcement and examples](https://medium.com/@orijtech/redis-clients-instrumented-by-opencensus-in-java-and-go-402470d92c5c).
diff --git a/content/integrations/sql.md b/content/integrations/sql.md
new file mode 100644
index 00000000..82eaec1d
--- /dev/null
+++ b/content/integrations/sql.md
@@ -0,0 +1,14 @@
+---
+title: "SQL"
+date: 2018-07-16T14:42:23-07:00
+draft: false
+class: "integration-page"
+---
+
+#### Resources
+
+* [ocsql](https://github.com/basvanbeek/ocsql): A database wrapper for SQL in Go
+
+#### Tutorials
+
+* [Go](https://medium.com/@bas.vanbeek/opencensus-and-go-database-sql-322a26be5cc5)
diff --git a/content/introduction/_index.md b/content/introduction/_index.md
new file mode 100644
index 00000000..65ab203e
--- /dev/null
+++ b/content/introduction/_index.md
@@ -0,0 +1,21 @@
+---
+title: "Introduction"
+date: 2018-07-16T14:28:03-07:00
+draft: false
+weight: 10
+---
+
+{{}} is a vendor-agnostic single distribution of libraries that provides observability for your microservices and monoliths alike. OpenCensus is an open source, community-developed project. It originates from a rewrite of the systems that have provided observability at Google for the past 10 years.
+
+OpenCensus is implemented in many languages, and it allows you to collect metrics and traces once
+and then export them to a variety of backends.
+
+In this section we will walk through what OpenCensus is, what problems it solves, and how it can help your project.
+
+{{% children %}}
+
+Or, if you are ready to integrate OpenCensus into your project, visit the [Quickstart](/quickstart).
+
+#### Partners & Contributors
+
+{{}}
diff --git a/content/introduction/features.md b/content/introduction/features.md
new file mode 100644
index 00000000..92677b22
--- /dev/null
+++ b/content/introduction/features.md
@@ -0,0 +1,18 @@
+---
+title: "Features"
+date: 2018-07-16T14:28:16-07:00
+draft: false
+weight: 2
+---
+
+#### Low latency
+OpenCensus is simple to integrate and use. It adds very low latency to your applications, and it is already integrated into both gRPC and HTTP transports.
+
+#### Vendor Agnosticism
+OpenCensus is vendor-agnostic and can upload data to any backend through its various exporter implementations. Although OpenCensus provides support for many backends, users can also implement their own exporters for proprietary or otherwise unsupported backends. [Read more](/core-concepts/exporters/).
+
+#### Simplified tracing
+Distributed traces track the progression of a single user request as it is handled by your internal services, until a response is returned to the user. [Read more](/core-concepts/tracing/).
+
+#### Context Propagation
+Context propagation is the mechanism by which information (of your choosing) is sent between your services. It is usually performed by sending data in headers and trailers, for example over HTTP and gRPC transports.
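As an illustration of the idea only (not the OpenCensus API; the class and header name below are hypothetical), context propagation boils down to an *inject* step on the sending side and an *extract* step on the receiving side:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy sketch of context propagation: a client injects its trace ID into the
 * outgoing request headers, and the server extracts it so that both sides of
 * the call participate in the same trace. Real systems use a standard wire
 * format (e.g. B3) rather than this made-up header.
 */
public class ContextPropagationSketch {
    static final String TRACE_HEADER = "x-example-trace-id";

    // Client side: copy the current trace ID into the outgoing headers.
    static Map<String, String> inject(String traceId, Map<String, String> headers) {
        headers.put(TRACE_HEADER, traceId);
        return headers;
    }

    // Server side: recover the caller's trace ID from the incoming headers,
    // falling back to a sentinel when the caller sent none.
    static String extract(Map<String, String> headers) {
        return headers.getOrDefault(TRACE_HEADER, "no-trace");
    }

    public static void main(String[] args) {
        Map<String, String> headers = inject("abc123", new HashMap<>());
        System.out.println("server sees trace: " + extract(headers)); // server sees trace: abc123
    }
}
```

The OpenCensus libraries perform the same inject/extract dance for you inside their gRPC and HTTP integrations, which is why traces span service boundaries without extra work.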
diff --git a/content/introduction/overview.md b/content/introduction/overview.md
new file mode 100644
index 00000000..8d1d8f45
--- /dev/null
+++ b/content/introduction/overview.md
@@ -0,0 +1,34 @@
+---
+title: "Overview"
+date: 2018-07-16T14:28:09-07:00
+draft: false
+weight: 1
+class: "resized-logo"
+---
+
+OpenCensus provides observability for your microservices and monoliths alike.
+
+It gives you the tools to track a request as it travels through each of your services, and it helps you gather any metrics of interest.
+
+The core functionality of OpenCensus is to collect traces and metrics from your app, display them locally, and send them to any analysis tool. However, OpenCensus provides a lot more than just data insight. This page describes some of that functionality and points you to resources for building it into your app.
+
+After [instrumenting](https://en.wikipedia.org/wiki/Instrumentation_(computer_programming)#Output) your code with OpenCensus, you will be able to optimize the speed of your services, understand exactly how a request travels between them, gather any useful metrics about your entire architecture, and more.
+
+{{}}
+
+{{% notice note %}}
+Already familiar with OpenCensus? [Click here](https://opencensus.io/docs/) for a full overview of our API to get started.
+{{% /notice %}}
+
+## Why OpenCensus?
+
+##### Visualize Request Lifecycle
+You can track a request as it propagates throughout all of your services. Additionally, you can visualize your data in any [backend](/core-concepts/exporters/#supported-backends) with a one-step solution.
+
+##### Perform Root-Cause Analysis
+Gain total situational clarity in your distributed services architecture. When a service runs into a problem, use OpenCensus to identify the point of failure.
+
+##### Optimize Service Latency
+Become empowered to optimize every component of your distributed services by gaining key insights into the latency and performance of every microservice and data store you manage.
+
+
diff --git a/content/java.md b/content/java.md
deleted file mode 100644
index 949f4687..00000000
--- a/content/java.md
+++ /dev/null
@@ -1,145 +0,0 @@
-+++
-Description = "java"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-title = "Java"
-date = "2018-05-18T09:59:40-05:00"
-+++
-
-The example demonstrates how to record stats and traces for a video processing system. It records data with the “frontend” tag so that collected data can be broken by the frontend user who initiated the video processing.
-
----
-
-#### API Documentation
-
-The OpenCensus Java artifacts are released to Maven Central [maven.org](http://search.maven.org/), each contains the jar file, the associated javadoc, and the associated source code. The OpenCensus Java API artifact (along with the associated javadoc and source) is available on Maven Central here: [Open Census API](https://search.maven.org/#search%7Cga%7C1%7Copencensus%20api)
-
----
-
-#### Example
-
-1. Clone the OpenCensus Java GitHub repository:
-``` java
-git clone https://github.com/census-instrumentation/opencensus-java.git
-cd opencensus-java/examples
-```
-
-2. Code is in the following directory:
-```
-src/main/java/io/opencensus/examples/helloworld/
-```
-
----
-
-#### To Build/Run The Example
-Further build instructions can be found in [examples/README.md](https://github.com/census-instrumentation/opencensus-java/blob/master/examples/README.md).
-
-The OpenCensus Java Quickstart example can be built/executed using either gradle, maven, or bazel:
-
-3. Build the example code e.g.: (assuming current directory is opencensus/examples/)
- * Gradle: ./gradlew installDist
- * Maven: mvn package appassembler:assemble
- * Bazel: bazel build :all
-
-4. Run the Quickstart example e.g.: (assuming current directory is opencensus/examples/)
- * Gradle: ./build/install/opencensus-examples/bin/QuickStart
- * Maven: ./target/appassembler/bin/QuickStart
- * Bazel: ./bazel-bin/QuickStart
-
----
-
-#### The Example Code
-``` java
-/** Simple program that collects data for video size. **/
-public final class QuickStart {
- static Logger logger = Logger.getLogger(
- QuickStart.class.getName());
- static Tagger tagger = Tags.getTagger();
- static ViewManager viewManager = Stats.getViewManager();
- static StatsRecorder statsRecorder = Stats.getStatsRecorder();
- static Tracer tracer = Tracing.getTracer();
-
- // frontendKey allows us to break down the recorded data
- static final TagKey FRONTEND_KEY = TagKey.create(
- "my.org/keys/frontend");
-
- // videoSize will measure the size of processed videos.
- static final MeasureLong VIDEO_SIZE = MeasureLong.create(
- "my.org/measure/video_size",
- "size of processed videos",
- "MBy");
-
- // Create view to see the processed video size distribution broken
- // down by frontend. The view has bucket boundaries (0, 256, 65536)
- // that will group measure values into histogram buckets.
- private static final View.Name VIDEO_SIZE_VIEW_NAME =
- View.Name.create("my.org/views/video_size");
- private static final View VIDEO_SIZE_VIEW = View.create(
- VIDEO_SIZE_VIEW_NAME,
- "processed video size over time",
- VIDEO_SIZE,
- Aggregation.Distribution.create(
- BucketBoundaries.create(Arrays.asList(0.0, 256.0, 65536.0))),
- Collections.singletonList(FRONTEND_KEY),
- Cumulative.create());
-
- /** Main launcher for the QuickStart example. */
- public static void main(String[] args) throws
- InterruptedException {
- TagContextBuilder tagContextBuilder = tagger.currentBuilder()
- .put(FRONTEND_KEY, TagValue.create("mobile-ios9.3.5"));
- SpanBuilder spanBuilder = tracer.spanBuilder(
- "my.org/ProcessVideo")
- .setRecordEvents(true)
- .setSampler(Samplers.alwaysSample());
- viewManager.registerView(VIDEO_SIZE_VIEW);
- LoggingTraceExporter.register();
-
- // Process video. Record the processed video size.
- try (Scope scopedTags = tagContextBuilder.buildScoped();
- Scope scopedSpan = spanBuilder.startScopedSpan()) {
- tracer.getCurrentSpan()
- .addAnnotation("Start processing video.");
- // Sleep for [0,10] milliseconds to fake work.
- Thread.sleep(new Random().nextInt(10) + 1);
- statsRecorder.newMeasureMap().put(VIDEO_SIZE, 25648).record();
- tracer.getCurrentSpan()
- .addAnnotation("Finished processing video.");
- } catch (Exception e) {
- tracer.getCurrentSpan()
- .addAnnotation("Exception thrown when processing video.");
- tracer.getCurrentSpan().setStatus(Status.UNKNOWN);
- logger.severe(e.getMessage());
- }
-
- logger.info("Wait longer than the reporting duration...");
- // Wait for a duration longer than reporting duration (5s) to
- // ensure spans are exported.
- Thread.sleep(5100);
- ViewData viewData = viewManager.getView(VIDEO_SIZE_VIEW_NAME);
- logger.info(
- String.format("Recorded stats for %s:\n %s",
- VIDEO_SIZE_VIEW_NAME.asString(), viewData));
- }
-}
-```
-
----
-
-#### The Example Output (Raw)
-```
-Mar 02, 2018 6:38:26 PM io.opencensus.examples.helloworld.QuickStart main
-INFO: Wait longer than the reporting duration...
-Mar 02, 2018 6:38:31 PM
-io.opencensus.exporter.trace.logging.LoggingTraceExporter
- $LoggingExporterHandler export
-INFO:
-SpanData{context=SpanContext{traceId=TraceId{traceId=
- 6490d4a26cffac83529a7679a0ef978b}, spanId=SpanId{spanId=2e1f17c65921d367}, traceOptions=TraceOptions{sampled=true}}, parentSpanId=null, hasRemoteParent=null, name=my.org/ProcessVideo, startTimestamp=Timestamp{seconds=1520044706, nanos=41005486}, attributes=Attributes{attributeMap={}, droppedAttributesCount=0}, annotations=TimedEvents{events=[TimedEvent{timestamp=Timestamp{seconds=1520044706, nanos=44080800}, event=Annotation{description=Start processing video., attributes={}}}, TimedEvent{timestamp=Timestamp{seconds=1520044706, nanos=53061607}, event=Annotation{description=Finished processing video., attributes={}}}], droppedEventsCount=0}, messageEvents=TimedEvents{events=[], droppedEventsCount=0}, links=Links{links=[], droppedLinksCount=0}, childSpanCount=null, status=Status{canonicalCode=OK, description=null}, endTimestamp=Timestamp{seconds=1520044706, nanos=54084898}}
-
-Mar 02, 2018 6:38:31 PM
-io.opencensus.examples.helloworld.QuickStart main INFO:
-Recorded stats for my.org/views/video_size: ViewData{view=View{name=Name{asString=my.org/views/video_size}, description=processed video size over time, measure=MeasureLong{name=my.org/measure/video_size, description=size of processed videos, unit=MBy}, aggregation=Distribution{bucketBoundaries=BucketBoundaries{boundaries=[0.0, 256.0, 65536.0]}}, columns=[TagKey{name=my.org/keys/frontend}], window=Cumulative{}}, aggregationMap={[TagValue{asString=mobile-ios9.3.5}]=DistributionData{mean=25648.0, count=1, min=25648.0, max=25648.0, sumOfSquaredDeviations=0.0, bucketCounts=[0, 0, 1, 0]}}, windowData=CumulativeData{start=Timestamp{seconds=1520044706, nanos=28000000}, end=Timestamp{seconds=1520044711, nanos=170000000}}}
-```
diff --git a/content/language-support/_index.md b/content/language-support/_index.md
new file mode 100644
index 00000000..fd000aed
--- /dev/null
+++ b/content/language-support/_index.md
@@ -0,0 +1,19 @@
+---
+title: "Language Support"
+date: 2018-07-16T14:37:29-07:00
+draft: false
+weight: 40
+class: "resized-logo"
+---
+
+
+
+Language|API reference
+---|---
+Go|https://godoc.org/go.opencensus.io
+Java|https://www.javadoc.io/doc/io.opencensus/opencensus-api/
+C++|https://github.com/census-instrumentation/opencensus-cpp
+Ruby|https://www.rubydoc.info/gems/opencensus
+Erlang|https://hexdocs.pm/opencensus/
+Python|https://census-instrumentation.github.io/opencensus-python/trace/api/index.html
+PHP|https://packagist.org/packages/opencensus/opencensus
diff --git a/content/overview.md b/content/overview.md
deleted file mode 100644
index 9ae0d3e7..00000000
--- a/content/overview.md
+++ /dev/null
@@ -1,56 +0,0 @@
-+++
-title = "Overview"
-type = "leftnav"
-date = "2018-05-30T10:49:38-05:00"
-+++
-
-OpenCensus is a framework for stats collection and distributed tracing. It supports multiple backends.
-
-
-
-
-
-In microservices architectures, it is difficult to understand how services use resources across shared infrastructure. In monolithic systems, we depend on traditional tools that report per-process resource usage and latency characteristics that are limited to a single process. In order to be able to collect and analyze resource utilization and performance characteristics of distributed systems, OpenCensus tracks resource utilization through the chain of services processing a user request.
-
-__Data collected by OpenCensus can be used for:__
-
-* Monitoring of resource usage.
-* Analyzing performance and efficiency characteristics of systems to reduce the overall resource consumption of resources and improve latency.
-* Analyzing the collected data for capacity planning. Being able to predict the overall impact of a product on the infrastructure and being able to estimate how much more resources are required if a product grows.
-* Being able to debug problems in isolation in complex systems.
-
-__OpenCensus aims to provide:__
-
-* Low-overhead collection.
-* Standard wire protocols and consistent APIs for handling trace and stats data.
-* A single set of libraries for many languages, including Java, C++, Go, Python, PHP, Erlang, and Ruby.
-* Integrations with web and RPC frameworks, making traces and stats available out of the box. Full extendability in implementing additional integrations.
-* Exporters for storage and analysis tools. Full extendability in implementing additional integrations.
-* In process debugging: an optional handler for displaying request stats and traces on instrumented hosts.
-
-No additional server or daemon is required to support OpenCensus.
-
-## Concepts
-
-### Tags
-
-OpenCensus allows systems to associate measurements with dimensions as they are recorded. Recorded data allows us to breakdown the measurements, analyze them from various different perspectives and be able to target specific cases in isolation even in highly interconnected and complex systems. [Read more](/tags).
-
-### Stats
-
-*Stats* is collection allow libraries and applications to record measurements, aggregate the recorded data and export them. [Read more](/stats).
-
-### Traces
-
-*Distributed traces* track the progression of a single user request as it is handled by the internal services until the user request is responded. [Read more](/traces).
-
-### Exporters
-
-OpenCensus is vendor-agnostic and can upload data to any backend with various exporter implementations. Even though, OpenCensus provides support for many backends, users can also implement their own exporters for proprietary and unofficially supported backends. [Read more](/exporters).
-
-### Z-Pages
-
-OpenCensus provides in-process dashboards that displays diagnostics data from the process. These pages are called z-pages and they are useful to understand to see collected data from a specific process without having to depend on any metric collection or distributed tracing backend. [Read more](/zpages).
\ No newline at end of file
diff --git a/content/php.md b/content/php.md
deleted file mode 100644
index f483e941..00000000
--- a/content/php.md
+++ /dev/null
@@ -1,505 +0,0 @@
-+++
-Description = "php"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-title = "PHP"
-date = "2018-05-18T12:37:28-05:00"
-+++
-
-The example demonstrates how to record traces for a simple website that calculate Fibonacci numbers recursively.
-
----
-
-#### API Documentation
-The OpenCensus libraries artifacts are released to Packagist ([packagist.org](https://packagist.org/)) [opencensus/opencensus](https://packagist.org/packages/opencensus/opencensus). The API documentation is available [here](https://census-instrumentation.github.io/opencensus-php/api/).
-
----
-#### Example
-1. Clone the OpenCensus PHP github repository:
-
-``` php
-git clone https://github.com/census-instrumentation/opencensus-php.git
-```
-
-2.Code is in directory:
-``` php
-examples/silex/
-```
-
----
-
-#### To Build/Run The Example
-1. Install dependencies via composer:
-
-``` php
-$ composer install
-```
-
-2. The OpenCensus PHP Quickstart example can be run using the build-in PHP webserver:
-
-``` php
-$ php -S localhost:8000 -t web
-```
-
-3. Make a HTTP request to hit the application:
-``` php
-$ curl http://localhost:8000/fib/3
-```
-
----
-
-#### The Example Code
-
-``` php
- 'fib',
- 'attributes' => [
- 'n' => $n
- ]
- ], function () use ($n) {
- if ($n < 3) {
- return $n;
- }
- return fib($n - 1) + fib($n - 2);
- });
-}
-
-$app = new Silex\Application();
-
-$app->get('/', function () {
- return 'Hello World!';
-});
-
-$app->get('/fib/{n}', function ($n) use ($app) {
- $n = (int) $n;
- $fib = fib($n);
- return sprintf('The %dth Fibonacci number is %d', $n, $fib);
-});
-
-$app->run();
-```
-
----
-
-#### The Example Output (Raw)
-```
-The 3th Fibonacci number is 3.Array
-(
- [0] => OpenCensus\Trace\Span Object
- (
- [traceId:OpenCensus\Trace\Span:private] =>
- [spanId:OpenCensus\Trace\Span:private] => 526d3545
- [parentSpanId:OpenCensus\Trace\Span:private] =>
- [name:OpenCensus\Trace\Span:private] => /fib/3
- [startTime:OpenCensus\Trace\Span:private] =>
- DateTime Object
- (
- [date] => 2018-03-22 19:47:00.739414
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [endTime:OpenCensus\Trace\Span:private] =>
- DateTime Object
- (
- [date] => 2018-03-22 19:47:00.794824
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [stackTrace:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [timeEvents:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [links:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [status:OpenCensus\Trace\Span:private] =>
- OpenCensus\Trace\Status Object
- (
- [code:OpenCensus\Trace\Status:private] => 200
- [message:OpenCensus\Trace\Status:private] =>
- HTTP status code: 200
- )
-
- [sameProcessAsParentSpan:OpenCensus\Trace\Span:private]
- =>
- [attributes:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- )
-
- [1] => OpenCensus\Trace\Span Object
- (
- [traceId:OpenCensus\Trace\Span:private] =>
- [spanId:OpenCensus\Trace\Span:private] => 60c9a7b2
- [parentSpanId:OpenCensus\Trace\Span:private] => 526d3545
- [name:OpenCensus\Trace\Span:private] => fib
- [startTime:OpenCensus\Trace\Span:private] => DateTime Object
- (
- [date] => 2018-03-22 19:47:00.788716
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [endTime:OpenCensus\Trace\Span:private] => DateTime Object
- (
- [date] => 2018-03-22 19:47:00.789070
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [stackTrace:OpenCensus\Trace\Span:private] => Array
- (
- [0] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 33
- [function] => fib
- )
-
- [1] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 151
- [function] => {closure}
- )
-
- [2] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 68
- [function] => handleRaw
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [3] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 496
- [function] => handle
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [4] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 477
- [function] => handle
- [class] => Silex\Application
- [type] => ->
- )
-
- [5] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 37
- [function] => run
- [class] => Silex\Application
- [type] => ->
- )
-
- )
-
- [timeEvents:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [links:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [status:OpenCensus\Trace\Span:private] =>
- [sameProcessAsParentSpan:OpenCensus\Trace\Span:private]
- =>
- [attributes:OpenCensus\Trace\Span:private] => Array
- (
- [n] => 3
- )
-
- )
-
- [2] => OpenCensus\Trace\Span Object
- (
- [traceId:OpenCensus\Trace\Span:private] =>
- [spanId:OpenCensus\Trace\Span:private] => 2c9be766
- [parentSpanId:OpenCensus\Trace\Span:private] => 60c9a7b2
- [name:OpenCensus\Trace\Span:private] => fib
- [startTime:OpenCensus\Trace\Span:private] =>
- DateTime Object
- (
- [date] => 2018-03-22 19:47:00.788845
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [endTime:OpenCensus\Trace\Span:private] =>
- DateTime Object
- (
- [date] => 2018-03-22 19:47:00.788921
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [stackTrace:OpenCensus\Trace\Span:private] => Array
- (
- [0] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 21
- [function] => fib
- )
-
- [1] => Array
- (
- [function] => {closure}
- )
-
- [2] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/opencensus/
- opencensus/src/Trace/Tracer/ContextTracer.php
- [line] => 66
- [function] => call_user_func_array
- )
-
- [3] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 33
- [function] => fib
- )
-
- [4] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 151
- [function] => {closure}
- )
-
- [5] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 68
- [function] => handleRaw
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [6] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 496
- [function] => handle
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [7] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 477
- [function] => handle
- [class] => Silex\Application
- [type] => ->
- )
-
- [8] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 37
- [function] => run
- [class] => Silex\Application
- [type] => ->
- )
-
- )
-
- [timeEvents:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [links:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [status:OpenCensus\Trace\Span:private] =>
- [sameProcessAsParentSpan:OpenCensus\Trace\Span:private] =>
- [attributes:OpenCensus\Trace\Span:private] => Array
- (
- [n] => 2
- )
-
- )
-
- [3] => OpenCensus\Trace\Span Object
- (
- [traceId:OpenCensus\Trace\Span:private] =>
- [spanId:OpenCensus\Trace\Span:private] => 4241b61
- [parentSpanId:OpenCensus\Trace\Span:private] => 60c9a7b2
- [name:OpenCensus\Trace\Span:private] => fib
- [startTime:OpenCensus\Trace\Span:private] => DateTime
- Object
- (
- [date] => 2018-03-22 19:47:00.788978
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [endTime:OpenCensus\Trace\Span:private] => DateTime
- Object
- (
- [date] => 2018-03-22 19:47:00.789041
- [timezone_type] => 3
- [timezone] => UTC
- )
-
- [stackTrace:OpenCensus\Trace\Span:private] => Array
- (
- [0] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 21
- [function] => fib
- )
-
- [1] => Array
- (
- [function] => {closure}
- )
-
- [2] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/opencensus/
- opencensus/src/Trace/Tracer/ContextTracer.php
- [line] => 66
- [function] => call_user_func_array
- )
-
- [3] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 33
- [function] => fib
- )
-
- [4] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 151
- [function] => {closure}
- )
-
- [5] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/symfony/http-
- kernel/HttpKernel.php
- [line] => 68
- [function] => handleRaw
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [6] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 496
- [function] => handle
- [class] => Symfony\Component\HttpKernel\HttpKernel
- [type] => ->
- )
-
- [7] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/vendor/silex/silex/src/
- Silex/Application.php
- [line] => 477
- [function] => handle
- [class] => Silex\Application
- [type] => ->
- )
-
- [8] => Array
- (
- [file] => /Users/chingor/php/opencensus-
- php/examples/silex/web/index.php
- [line] => 37
- [function] => run
- [class] => Silex\Application
- [type] => ->
- )
-
- )
-
- [timeEvents:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [links:OpenCensus\Trace\Span:private] => Array
- (
- )
-
- [status:OpenCensus\Trace\Span:private] =>
- [sameProcessAsParentSpan:OpenCensus\Trace\Span:private]
- =>
- [attributes:OpenCensus\Trace\Span:private] => Array
- (
- [n] => 1
- )
-
- )
-
-)
-```
diff --git a/content/python.md b/content/python.md
deleted file mode 100644
index 496f0bc3..00000000
--- a/content/python.md
+++ /dev/null
@@ -1,102 +0,0 @@
-+++
-Description = "python"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-title = "Python"
-date = "2018-05-18T13:08:20-05:00"
-+++
-
-The example demonstrates how to record traces for a simple Flask web application.
-
----
-
-#### API Documentation
-
-The OpenCensus libraries artifacts are released to [PyPI](https://pypi.python.org/pypi/opencensus). The API documentation is available [here](https://census-instrumentation.github.io/opencensus-python/trace/api/index.html).
-
----
-
-#### The Quickstart Example Code
-
-1. Clone the OpenCensus Python GitHub repo:
-```
-git clone https://github.com/census-instrumentation/opencensus-python.git
-```
-
-2. Code is in directory:
-```
-examples/trace/helloworld/flask
-```
-
----
-
-#### To Run The Example
-
-1. Install dependencies via pip:
-``` python
-$ pip install opencensus
-```
-
-2. The OpenCensus Python Quickstart example can be run as below:
-``` python
-$ python simple.py
-```
-
-3. Make a HTTP request to hit the application:
-``` python
-$ curl http://localhost:8080/
-```
-
----
-
-#### The Example Code
-``` python
-import flask
-from opencensus.trace.exporters import stackdriver_exporter
-from opencensus.trace.ext.flask.flask_middleware import FlaskMiddleware
-
-
-app = flask.Flask(__name__)
-
-# Enable tracing the requests
-middleware = FlaskMiddleware(app)
-
-
-@app.route('/')
-def hello():
- return 'Hello world!'
-
-
-if __name__ == '__main__':
- app.run(host='localhost', port=8080)
-```
-
----
-
-#### The Example Output (Raw)
-```
-[
- SpanData(
- name='[GET]http://localhost:8080/',
- context=,
- span_id='0eadaaabacea4ca7',
- parent_span_id=None,
- attributes={
- '/http/method': 'GET',
- '/http/url': 'http://localhost:8080/',
- '/http/status_code': '200
- },
- start_time='2018-04-09T23:04:41.784921Z',
- end_time='2018-04-09T23:04:41.785165Z',
- child_span_count=0,
- stack_trace=None,
- time_events=[],
- links=[],
- status=None,
- same_process_as_parent_span=None,
- span_kind=0
- )
-]
-```
diff --git a/content/quickstart/_index.md b/content/quickstart/_index.md
new file mode 100644
index 00000000..e3e0be53
--- /dev/null
+++ b/content/quickstart/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Quickstart"
+date: 2018-07-16T14:28:57-07:00
+draft: false
+weight: 20
+---
+
+By completing this quickstart, you will learn how to:
+
+* Collect [metrics](/core-concepts/metrics) from your services
+* [Trace](/core-concepts/tracing) a request as it passes through your services
+* [Export](/core-concepts/exporters) your data to one of our [supported backends](/supported-exporters/)
+
+{{}}
diff --git a/content/quickstart/go/_index.md b/content/quickstart/go/_index.md
new file mode 100644
index 00000000..edd0d4b5
--- /dev/null
+++ b/content/quickstart/go/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Go"
+date: 2018-07-16T14:29:03-07:00
+draft: false
+class: "resized-logo"
+---
+
+
+
+In this quickstart, using OpenCensus Go, you will gain hands-on experience with:
+{{% children %}}
+
+For full API references, please take a look at:
+
+Resource|Link
+---|---
+GoDoc|https://godoc.org/go.opencensus.io
+GitHub repository|https://github.com/census-instrumentation/opencensus-go
diff --git a/content/quickstart/go/metrics.md b/content/quickstart/go/metrics.md
new file mode 100644
index 00000000..25367450
--- /dev/null
+++ b/content/quickstart/go/metrics.md
@@ -0,0 +1,1572 @@
+---
+title: "Metrics"
+date: 2018-07-16T14:29:10-07:00
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#background)
+- [Installation](#installation)
+- [Brief Overview](#brief-overview)
+- [Getting started](#getting-started)
+- [Enable Metrics](#enable-metrics)
+ - [Import Packages](#import-metrics-packages)
+ - [Create Metrics](#create-metrics)
+ - [Create Tags](#create-tags)
+ - [Inserting Tags](#inserting-tags)
+ - [Recording Metrics](#recording-metrics)
+- [Enable Views](#enable-views)
+ - [Import Packages](#import-views-packages)
+ - [Create Views](#create-views)
+ - [Register Views](#register-views)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Views](#export-views)
+- [Viewing your Metrics on Stackdriver](#viewing-your-metrics-on-stackdriver)
+
+In this quickstart, we’ll gain insights into a segment of code and learn how to:
+
+1. Collect metrics using [OpenCensus Metrics](/core-concepts/metrics) and [Tags](/core-concepts/tags)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View the metrics on the backend of our choice
+
+#### Requirements
+- Go 1.9 and above
+- A Google Cloud Platform account and project
+- Google Stackdriver Monitoring enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+
+OpenCensus: `go get go.opencensus.io/...`
+
+Stackdriver exporter: `go get contrib.go.opencensus.io/exporter/stackdriver`
+
+#### Brief Overview
+By the end of this tutorial, we will do these four things to obtain metrics using OpenCensus:
+
+1. Create quantifiable metrics (numerical) that we will record
+2. Create [tags](/core-concepts/tags) that we will associate with our metrics
+3. Organize our metrics, similar to writing a report, into a `View`
+4. Export our views to a backend (Stackdriver in this case)
+
+
+#### Getting Started
+
+{{% notice note %}}
+Unsure how to write and execute Go code? [Click here](https://golang.org/doc/code.html).
+{{% /notice %}}
+
+We will write a simple "read-evaluate-print-loop" (REPL) app. In it we'll collect some metrics to observe the work that is going on in this code, such as:
+
+- Latency per processing loop
+- Number of lines read
+- Number of errors
+- Line lengths
+
+First, create a file called `repl.go`.
+```bash
+touch repl.go
+```
+
+Next, put the following code inside of `repl.go`:
+
+{{}}
+package main
+
+import (
+	"bufio"
+	"bytes"
+	"fmt"
+	"io"
+	"log"
+	"os"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+
+You can run the code via `go run repl.go`.
+
+#### Enable Metrics
+
+
+##### Import Packages
+
+To enable metrics, we’ll import a number of core and OpenCensus packages:
+
+{{}}
+{{}}
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+
+##### Create Metrics
+
+First, we will create the variables needed to later record our metrics.
+
+{{}}
+{{}}
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Create Tags
+
+Now we will create the variable that we will later need to add extra text metadata to our metrics.
+
+{{}}
+{{}}
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+We will later use this tag, called KeyMethod, to record which method is being invoked. In our scenario, we will only use it to record that the "repl" method is generating our data.
+
+Again, this is arbitrary and purely up to the user. For example, if we wanted to track which operating system a user is using, we could do so like this:
+```go
+osKey, _ := tag.NewKey("operating_system")
+```
+
+Later, when we use osKey, we will be given an opportunity to enter values such as "windows" or "mac".
+
+##### Inserting Tags
+Now we will insert a specific tag called "repl". It will give us a new `context.Context ctx` which we will use while we later record our metrics. This `ctx` has all tags that have previously been inserted, thus allowing for context propagation.
+
+{{}}
+{{}}
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+When recording metrics, we will need the `ctx` from `tag.New`. We will be recording metrics in `processLine`, so let's go ahead and make `ctx` available now.
+
+{{}}
+{{}}
+// ...
+out, err := processLine(ctx, line)
+
+// ...
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+	ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+	if err != nil {
+		return err
+	}
+
+	fmt.Printf("> ")
+	line, _, err := br.ReadLine()
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Recording Metrics
+
+Now we will record the desired metrics. To do so, we will use `stats.Record` and pass in our `ctx` and [previously instantiated metrics variables](#create-metrics).
+
+{{}}
+{{}}
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+#### Enable Views
+We will be adding the View package: `"go.opencensus.io/stats/view"`
+
+
+##### Import Packages
+{{}}
+{{}}
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Create Views
+We now determine how our metrics will be organized by creating `Views`.
+
+{{}}
+{{}}
+var (
+ LatencyView = &view.View{
+ Name: "demo/latency",
+ Measure: MLatencyMs,
+ Description: "The distribution of the latencies",
+
+ // Latency in buckets:
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
+		TagKeys:     []tag.Key{KeyMethod},
+	}
+
+ LineCountView = &view.View{
+ Name: "demo/lines_in",
+ Measure: MLinesIn,
+ Description: "The number of lines from standard input",
+ Aggregation: view.Count(),
+ }
+
+ ErrorCountView = &view.View{
+ Name: "demo/errors",
+ Measure: MErrors,
+ Description: "The number of errors encountered",
+ Aggregation: view.Count(),
+ }
+
+ LineLengthView = &view.View{
+ Description: "Groups the lengths of keys in buckets",
+ Measure: MLineLengths,
+		// Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
+ }
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+var (
+ LatencyView = &view.View{
+ Name: "demo/latency",
+ Measure: MLatencyMs,
+ Description: "The distribution of the latencies",
+
+ // Latency in buckets:
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
+		TagKeys:     []tag.Key{KeyMethod},
+	}
+
+ LineCountView = &view.View{
+ Name: "demo/lines_in",
+ Measure: MLinesIn,
+ Description: "The number of lines from standard input",
+ Aggregation: view.Count(),
+ }
+
+ ErrorCountView = &view.View{
+ Name: "demo/errors",
+ Measure: MErrors,
+ Description: "The number of errors encountered",
+ Aggregation: view.Count(),
+ }
+
+ LineLengthView = &view.View{
+ Description: "Groups the lengths of keys in buckets",
+ Measure: MLineLengths,
+		// Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
+ }
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
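The bounds passed to `view.Distribution` split the number line into `len(bounds)+1` buckets, and each recorded value increments exactly one bucket counter. A stdlib-only sketch of that mapping (`bucketIndex` is our illustrative helper; the boundary convention shown, lower bound inclusive, matches the `>=` comments above, but consult the view package for the authoritative rule):

```go
package main

import "fmt"

// bucketIndex returns the index of the distribution bucket that v
// falls into, given the same bounds passed to view.Distribution:
// bucket 0 is everything below bounds[0], bucket i covers
// [bounds[i-1], bounds[i]), and the last bucket is the overflow.
func bucketIndex(v float64, bounds []float64) int {
	for i, b := range bounds {
		if v < b {
			return i
		}
	}
	return len(bounds)
}

func main() {
	lengths := []float64{0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000}
	// A 12-byte line lands in the [10, 15) bucket.
	fmt.Println(bucketIndex(12, lengths)) // prints 3
}
```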
+##### Register Views
+We now register the views and set the reporting period.
+
+{{}}
+{{}}
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Register the views
+ if err := view.Register(LatencyView, LineCountView, ErrorCountView, LineLengthView); err != nil {
+ log.Fatalf("Failed to register views: %v", err)
+ }
+
+	// Set the metrics reporting period to 2 seconds
+ view.SetReportingPeriod(2 * time.Second)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+	// Counts the number of non-EOF (end-of-file) errors.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+var (
+ LatencyView = &view.View{
+ Name: "demo/latency",
+ Measure: MLatencyMs,
+ Description: "The distribution of the latencies",
+
+ // Latency in buckets:
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
+		TagKeys:     []tag.Key{KeyMethod},
+	}
+
+ LineCountView = &view.View{
+ Name: "demo/lines_in",
+ Measure: MLinesIn,
+ Description: "The number of lines from standard input",
+ Aggregation: view.Count(),
+ }
+
+ ErrorCountView = &view.View{
+ Name: "demo/errors",
+ Measure: MErrors,
+ Description: "The number of errors encountered",
+ Aggregation: view.Count(),
+ }
+
+ LineLengthView = &view.View{
+ Description: "Groups the lengths of keys in buckets",
+ Measure: MLineLengths,
+		// Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
+ }
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Register the views
+ if err := view.Register(LatencyView, LineCountView, ErrorCountView, LineLengthView); err != nil {
+ log.Fatalf("Failed to register views: %v", err)
+ }
+
+	// Set the metrics reporting period to 2 seconds
+ view.SetReportingPeriod(2 * time.Second)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+#### Exporting to Stackdriver
+
+
+##### Import Packages
+We will be adding the Stackdriver package: `"contrib.go.opencensus.io/exporter/stackdriver"`
+
+{{}}
+{{}}
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+ // Counts the number of non-EOF (end-of-file) errors encountered.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+var (
+ LatencyView = &view.View{
+ Name: "demo/latency",
+ Measure: MLatencyMs,
+ Description: "The distribution of the latencies",
+
+ // Latency in buckets:
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
+ TagKeys: []tag.Key{KeyMethod}}
+
+ LineCountView = &view.View{
+ Name: "demo/lines_in",
+ Measure: MLinesIn,
+ Description: "The number of lines from standard input",
+ Aggregation: view.Count(),
+ }
+
+ ErrorCountView = &view.View{
+ Name: "demo/errors",
+ Measure: MErrors,
+ Description: "The number of errors encountered",
+ Aggregation: view.Count(),
+ }
+
+ LineLengthView = &view.View{
+ Description: "Groups the lengths of keys in buckets",
+ Measure: MLineLengths,
+ // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
+ }
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Register the views
+ if err := view.Register(LatencyView, LineCountView, ErrorCountView, LineLengthView); err != nil {
+ log.Fatalf("Failed to register views: %v", err)
+ }
+
+ // Set the metrics reporting period to 2 seconds
+ view.SetReportingPeriod(2 * time.Second)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Export Views
+In our `main` function, first we create the Stackdriver exporter:
+```go
+// Create the Stackdriver stats exporter
+sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECT_ID"),
+ MetricPrefix: os.Getenv("GCP_METRIC_PREFIX"),
+})
+if err != nil {
+ log.Fatalf("Failed to create the Stackdriver stats exporter: %v", err)
+}
+defer sd.Flush()
+```
+
+Then we register the views with Stackdriver:
+```go
+// Register the stats exporter
+view.RegisterExporter(sd)
+```
+
+{{}}
+{{}}
+func main() {
+ // Create the Stackdriver stats exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECT_ID"),
+ MetricPrefix: os.Getenv("GCP_METRIC_PREFIX"),
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver stats exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // Register the stats exporter
+ view.RegisterExporter(sd)
+
+ // Register the views
+ if err := view.Register(LatencyView, LineCountView, ErrorCountView, LineLengthView); err != nil {
+ log.Fatalf("Failed to register views: %v", err)
+ }
+
+ // Set the metrics reporting period to 2 seconds
+ view.SetReportingPeriod(2 * time.Second)
+
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "time"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/stats"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/tag"
+)
+
+var (
+ // The latency in milliseconds
+ MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+ // Counts the number of lines read in from standard input
+ MLinesIn = stats.Int64("repl/lines_in", "The number of lines read in", "1")
+
+ // Counts the number of non-EOF (end-of-file) errors encountered.
+ MErrors = stats.Int64("repl/errors", "The number of errors encountered", "1")
+
+ // Counts/groups the lengths of lines read in.
+ MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
+)
+
+var (
+ KeyMethod, _ = tag.NewKey("method")
+)
+
+var (
+ LatencyView = &view.View{
+ Name: "demo/latency",
+ Measure: MLatencyMs,
+ Description: "The distribution of the latencies",
+
+ // Latency in buckets:
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
+ TagKeys: []tag.Key{KeyMethod}}
+
+ LineCountView = &view.View{
+ Name: "demo/lines_in",
+ Measure: MLinesIn,
+ Description: "The number of lines from standard input",
+ Aggregation: view.Count(),
+ }
+
+ ErrorCountView = &view.View{
+ Name: "demo/errors",
+ Measure: MErrors,
+ Description: "The number of errors encountered",
+ Aggregation: view.Count(),
+ }
+
+ LineLengthView = &view.View{
+ Description: "Groups the lengths of keys in buckets",
+ Measure: MLineLengths,
+ // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
+ }
+)
+
+func main() {
+ // Create the Stackdriver stats exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECT_ID"),
+ MetricPrefix: os.Getenv("GCP_METRIC_PREFIX"),
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver stats exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // Register the stats exporter
+ view.RegisterExporter(sd)
+
+ // Register the views
+ if err := view.Register(LatencyView, LineCountView, ErrorCountView, LineLengthView); err != nil {
+ log.Fatalf("Failed to register views: %v", err)
+ }
+
+ // Set the metrics reporting period to 2 seconds
+ view.SetReportingPeriod(2 * time.Second)
+
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"))
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("> ")
+ line, _, err := br.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ stats.Record(ctx, MErrors.M(1))
+ }
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ stats.Record(ctx, MErrors.M(1))
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ startTime := time.Now()
+ defer func() {
+ ms := float64(time.Since(startTime).Nanoseconds()) / 1e6
+ stats.Record(ctx, MLinesIn.M(1), MLatencyMs.M(ms), MLineLengths.M(int64(len(in))))
+ }()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+#### Viewing your Metrics on Stackdriver
+With the above you should now be able to navigate to the [Google Cloud Platform console](https://app.google.stackdriver.com/metrics-explorer), select your project, and view the metrics.
+
+In the query box to find metrics, type `quickstart` as a prefix:
+
+
+
+And on selecting any of the metrics e.g. `quickstart/demo/lines_in`, we’ll get...
+
+
+
+Let’s examine the latency buckets:
+
+
+
+On checking out the Stacked Area display of the latency, we can see that the 99th percentile latency was 24.75ms. And, for `line_lengths`:
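These bucket boundaries come straight from the `view.Distribution` aggregations registered earlier. As a rough, stdlib-only sketch (the `bucketIndex` helper is hypothetical, not the exporter's actual code), a recorded value lands in the bucket whose range contains it:

```go
package main

import "fmt"

// bucketIndex returns the index of the distribution bucket a recorded
// value falls into, given the bucket lower bounds passed to
// view.Distribution. Bucket 0 is the implicit (-inf, bounds[0]) bucket;
// bucket i holds values in [bounds[i-1], bounds[i]); values at or above
// the final bound land in the overflow bucket.
func bucketIndex(bounds []float64, value float64) int {
	for i, b := range bounds {
		if value < b {
			return i
		}
	}
	return len(bounds)
}

func main() {
	// The latency bounds from LatencyView.
	bounds := []float64{0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000}
	fmt.Println(bucketIndex(bounds, 24.75)) // prints 1: a 24.75ms latency falls in the [0ms, 25ms) bucket
}
```

Stackdriver then estimates percentiles, such as the 99th percentile above, from these per-bucket counts.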
+
+
diff --git a/content/quickstart/go/tracing.md b/content/quickstart/go/tracing.md
new file mode 100644
index 00000000..5c5a9b45
--- /dev/null
+++ b/content/quickstart/go/tracing.md
@@ -0,0 +1,729 @@
+---
+title: "Tracing"
+date: 2018-07-16T14:29:06-07:00
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#background)
+- [Installation](#installation)
+- [Getting started](#getting-started)
+- [Enable Tracing](#enable-tracing)
+ - [Import Packages](#import-tracing-packages)
+ - [Instrumentation](#instrument-tracing)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Traces](#export-traces)
+ - [Create Annotations](#create-annotations)
+- [Viewing your Traces on Stackdriver](#viewing-your-traces-on-stackdriver)
+
+In this quickstart, we'll glean insights into a segment of code and learn how to:
+
+1. Trace the code using [OpenCensus Tracing](/core-concepts/tracing)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View traces on the backend of our choice
+
+#### Requirements
+- Go1.9 and above
+- Google Cloud Platform account and project
+- Google Stackdriver Tracing enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+
+OpenCensus: `go get go.opencensus.io/...`
+
+Stackdriver exporter: `go get contrib.go.opencensus.io/exporter/stackdriver`
+
+#### Getting Started
+
+{{% notice note %}}
+Unsure how to write and execute Go code? [Click here](https://golang.org/doc/code.html).
+{{% /notice %}}
+
+It would be nice if we could trace the following code, thus giving us observability into how the code functions.
+
+First, create a file called `repl.go`.
+```bash
+touch repl.go
+```
+
+Next, put the following code inside of `repl.go`:
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "log"
+ "os"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+
+ line, err := readLine(br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(br *bufio.Reader) ([]byte, error) {
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return nil, err
+ }
+
+ return line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+
+You can run the code via `go run repl.go`.
+
+#### Enable Tracing
+
+
+##### Import Packages
+
+To enable tracing, we’ll import the context package (`context`) as well as the OpenCensus Trace package (`go.opencensus.io/trace`). Your import statement will look like this:
+
+{{}}
+{{}}
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "go.opencensus.io/trace"
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+
+ line, err := readLine(br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(br *bufio.Reader) ([]byte, error) {
+ line, _, err := br.ReadLine()
+ if err != nil {
+ return nil, err
+ }
+
+ return line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(in []byte) (out []byte, err error) {
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+
+##### Instrumentation
+
+We will be tracing the execution as it starts in `readEvaluateProcess`, goes to `readLine`, and finally travels through `processLine`.
+
+To accomplish this, we must do two things:
+
+**1. Create a span in each of the three functions**
+
+You can create a span by inserting the following two lines in each of the three functions:
+```go
+ctx, span := trace.StartSpan(ctx, "spanName")
+defer span.End()
+```
+
+**2. Provide the `context.Context` to all spans**
+
+In order to trace each span, we will provide the **`ctx` returned from the first `StartSpan` call to all subsequent `StartSpan` calls**.
+
+This means that we will modify the `readLine` and `processLine` functions so they accept a `context.Context` argument.
+
+
+{{}}
+{{}}
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ fmt.Printf("> ")
+
+ _, line, err := readLine(ctx, br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(ctx context.Context, br *bufio.Reader) (context.Context, []byte, error) {
+ ctx, span := trace.StartSpan(ctx, "readLine")
+ defer span.End()
+
+ line, _, err := br.ReadLine()
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return ctx, nil, err
+ }
+
+ return ctx, line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ _, span := trace.StartSpan(ctx, "processLine")
+ defer span.End()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ fmt.Printf("> ")
+
+ _, line, err := readLine(ctx, br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(ctx context.Context, br *bufio.Reader) (context.Context, []byte, error) {
+ ctx, span := trace.StartSpan(ctx, "readLine")
+ defer span.End()
+
+ line, _, err := br.ReadLine()
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return ctx, nil, err
+ }
+
+ return ctx, line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ _, span := trace.StartSpan(ctx, "processLine")
+ defer span.End()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+When creating a new span with `trace.StartSpan(ctx, "spanName")`, the package first checks whether a parent span already exists in the `context.Context` argument. If one exists, the new span is created as a child of it. Otherwise, the newly created span is inserted into the returned `context.Context` as the parent span, so subsequent calls that reuse that context will start child spans.
+
+#### Exporting to Stackdriver
+
+
+##### Import Packages
+To turn on Stackdriver Tracing, we’ll need to import the Stackdriver exporter from `contrib.go.opencensus.io/exporter/stackdriver`.
+
+{{}}
+{{}}
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/trace"
+)
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ fmt.Printf("> ")
+
+ _, line, err := readLine(ctx, br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(ctx context.Context, br *bufio.Reader) (context.Context, []byte, error) {
+ ctx, span := trace.StartSpan(ctx, "readLine")
+ defer span.End()
+
+ line, _, err := br.ReadLine()
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return ctx, nil, err
+ }
+
+ return ctx, line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ _, span := trace.StartSpan(ctx, "processLine")
+ defer span.End()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Export Traces
+To get our code ready to export, we will be adding a few lines of code to our `main` function.
+
+1. We want to export our traces to Stackdriver
+```go
+stackdriver.NewExporter
+trace.RegisterExporter
+```
+
+2. We want to trace a large percentage of executions (this is called [sampling](/core-concepts/tracing/#sampling))
+```go
+trace.ApplyConfig
+```
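Conceptually, a probability sampler keeps a trace when a uniform random draw falls below the configured fraction, so `trace.ProbabilitySampler(1.0)` samples every trace. Here is a stdlib-only sketch of the idea (the real sampler derives its decision from the trace ID so the choice is stable for a whole trace):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shouldSample decides whether to record a trace, keeping roughly
// fraction*100 percent of them. Because rand.Float64 returns a value
// in [0.0, 1.0), a fraction of 1.0 always samples and 0.0 never does.
func shouldSample(fraction float64) bool {
	return rand.Float64() < fraction
}

func main() {
	kept := 0
	for i := 0; i < 1000; i++ {
		if shouldSample(0.25) {
			kept++
		}
	}
	fmt.Printf("kept %d of 1000 traces\n", kept) // roughly 250, varying by run
}
```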
+
+Now, let's look at what our `main` function will look like:
+{{}}
+{{}}
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Enable the Stackdriver Tracing exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECTID"),
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // Register/enable the trace exporter
+ trace.RegisterExporter(sd)
+
+ // For demo purposes, set the trace sampling probability to be high
+ trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(1.0)})
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Enable the Stackdriver Tracing exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECTID"),
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // Register/enable the trace exporter
+ trace.RegisterExporter(sd)
+
+ // For demo purposes, set the trace sampling probability to be high
+ trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(1.0)})
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ fmt.Printf("> ")
+
+ _, line, err := readLine(ctx, br)
+ if err != nil {
+ return err
+ }
+
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(ctx context.Context, br *bufio.Reader) (context.Context, []byte, error) {
+ ctx, span := trace.StartSpan(ctx, "readLine")
+ defer span.End()
+
+ line, _, err := br.ReadLine()
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return ctx, nil, err
+ }
+
+ return ctx, line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ _, span := trace.StartSpan(ctx, "processLine")
+ defer span.End()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+##### Create Annotations
+When looking at our traces on a backend (such as Stackdriver), we can add metadata to our traces to increase our post-mortem insight.
+
+Let's record the length of each requested string so that it is available to view when we are looking at our traces. We can do this by annotating our `readEvaluateProcess` function.
+
+{{}}
+{{}}
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ // Not timing from: prompt to when we read a
+ // line, because you can infinitely wait on stdin.
+ line, _, err := br.ReadLine()
+
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return err
+ }
+
+ span.Annotate([]trace.Attribute{
+ trace.Int64Attribute("len", int64(len(line))),
+ trace.StringAttribute("use", "repl"),
+ }, "Invoking processLine")
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "log"
+ "os"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ // In a REPL:
+ // 1. Read input
+ // 2. process input
+ br := bufio.NewReader(os.Stdin)
+
+ // Enable the Stackdriver Tracing exporter
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: os.Getenv("GCP_PROJECTID"),
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ defer sd.Flush()
+
+ // Register/enable the trace exporter
+ trace.RegisterExporter(sd)
+
+ // For demo purposes, set the trace sampling probability to be high
+ trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(1.0)})
+
+ // repl is the read, evaluate, print, loop
+ for {
+ if err := readEvaluateProcess(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ log.Fatal(err)
+ }
+ }
+}
+
+// readEvaluateProcess reads a line from the input reader and
+// then processes it. It returns an error if any was encountered.
+func readEvaluateProcess(br *bufio.Reader) error {
+ fmt.Printf("> ")
+ // Not timing from: prompt to when we read a
+ // line, because you can infinitely wait on stdin.
+ line, _, err := br.ReadLine()
+
+ ctx, span := trace.StartSpan(context.Background(), "repl")
+ defer span.End()
+
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return err
+ }
+
+ span.Annotate([]trace.Attribute{
+ trace.Int64Attribute("len", int64(len(line))),
+ trace.StringAttribute("use", "repl"),
+ }, "Invoking processLine")
+ out, err := processLine(ctx, line)
+ if err != nil {
+ return err
+ }
+ fmt.Printf("< %s\n\n", out)
+ return nil
+}
+
+func readLine(ctx context.Context, br *bufio.Reader) (context.Context, []byte, error) {
+ ctx, span := trace.StartSpan(ctx, "readLine")
+ defer span.End()
+
+ line, _, err := br.ReadLine()
+ if err != nil {
+ span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
+ return ctx, nil, err
+ }
+
+ return ctx, line, err
+}
+
+// processLine takes in a line of text and
+// transforms it. Currently it just capitalizes it.
+func processLine(ctx context.Context, in []byte) (out []byte, err error) {
+ _, span := trace.StartSpan(ctx, "processLine")
+ defer span.End()
+
+ return bytes.ToUpper(in), nil
+}
+{{}}
+{{}}
+
+#### Viewing your Traces on Stackdriver
+With the above you should now be able to navigate to the [Google Cloud Platform console](https://console.cloud.google.com/traces/traces), select your project, and view the traces.
+
+
+
+On clicking on one of the traces, we should be able to see the annotation whose description is `Invoking processLine`, and on clicking on it, it should show our attributes `len` and `use`.
+
+
diff --git a/content/quickstart/java/_index.md b/content/quickstart/java/_index.md
new file mode 100644
index 00000000..72f7011b
--- /dev/null
+++ b/content/quickstart/java/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Java"
+date: 2018-07-16T14:29:15-07:00
+draft: false
+class: "resized-logo"
+---
+
+
+
+In this quickstart, using OpenCensus Java, you will gain hands-on experience with:
+{{% children %}}
+
+For full API references, please take a look at:
+
+Resource|Link
+---|---
+JavaDoc|https://javadoc.io/doc/io.opencensus/opencensus-api/
+Github repository|https://github.com/census-instrumentation/opencensus-java
diff --git a/content/quickstart/java/metrics.md b/content/quickstart/java/metrics.md
new file mode 100644
index 00000000..fa111166
--- /dev/null
+++ b/content/quickstart/java/metrics.md
@@ -0,0 +1,1903 @@
+---
+title: "Metrics"
+date: 2018-07-16T14:29:27-07:00
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#background)
+- [Installation](#installation)
+- [Brief Overview](#brief-overview)
+- [Getting started](#getting-started)
+- [Enable Metrics](#enable-metrics)
+ - [Import Packages](#import-metrics-packages)
+ - [Create Metrics](#create-metrics)
+ - [Create Tags](#create-tags)
+ - [Inserting Tags](#inserting-tags)
+ - [Recording Metrics](#recording-metrics)
+- [Enable Views](#enable-views)
+ - [Import Packages](#import-views-packages)
+ - [Create Views](#create-views)
+ - [Register Views](#register-views)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Views](#export-views)
+- [Viewing your Metrics on Stackdriver](#viewing-your-metrics-on-stackdriver)
+
+In this quickstart, we'll glean insights into a segment of code and learn how to:
+
+1. Collect metrics using [OpenCensus Metrics](/core-concepts/metrics) and [Tags](/core-concepts/tags)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View the metrics on the backend of our choice
+
+#### Requirements
+- Java 8+
+- Google Cloud Platform account and project
+- Google Stackdriver Monitoring enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+```bash
+mvn archetype:generate \
+ -DgroupId=io.opencensus.quickstart \
+ -DartifactId=repl-app \
+ -DarchetypeArtifactId=maven-archetype-quickstart \
+ -DinteractiveMode=false
+
+cd repl-app/src/main/java/io/opencensus/quickstart
+
+mv App.java Repl.java
+```
+Put this in your newly generated `pom.xml` file:
+
+```xml
+
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.7.0</version>
+                <configuration>
+                    <source>1.8</source>
+                    <target>1.8</target>
+                </configuration>
+            </plugin>
+
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>appassembler-maven-plugin</artifactId>
+                <version>1.10</version>
+                <configuration>
+                    <programs>
+                        <program>
+                            <id>Repl</id>
+                            <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                        </program>
+                    </programs>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
+```
+
+Put this in `src/main/java/io/opencensus/quickstart/Repl.java`:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+public class Repl {
+ public static void main(String ...args) {
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ System.out.print("> ");
+ System.out.flush();
+ String line = stdin.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+}
+```
+
+Install required dependencies:
+```bash
+mvn install
+```
+
+#### Brief Overview
+By the end of this tutorial, we will do these four things to obtain metrics using OpenCensus:
+
+1. Create quantifiable metrics (numerical) that we will record
+2. Create [tags](/core-concepts/tags) that we will associate with our metrics
+3. Organize our metrics, similar to writing a report, into a `View`
+4. Export our views to a backend (Stackdriver in this case)
+
+#### Getting Started
+We will be creating a simple "read-evaluate-print loop" (REPL) app. Let's collect some metrics to observe the work going on in this code, such as:
+
+- Latency per processing loop
+- Number of lines read
+- Number of errors
+- Line lengths
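+
+The first of these, per-loop latency, we will derive ourselves by timestamping each loop. The timing arithmetic is plain Java and independent of OpenCensus; here is a minimal sketch (the class name `LatencySketch` is illustrative, not part of the tutorial code):
+
+```java
+public class LatencySketch {
+    // Elapsed time in nanoseconds, converted to the milliseconds
+    // we will later record as the latency measure.
+    static double elapsedMs(long startNs, long endNs) {
+        return (endNs - startNs) / 1e6;
+    }
+
+    public static void main(String[] args) {
+        long start = System.nanoTime();
+        String processed = "hello".toUpperCase(); // the work being timed
+        long end = System.nanoTime();
+        System.out.println(processed + " took " + elapsedMs(start, end) + " ms");
+    }
+}
+```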
+
+Let's first run the application and see what we have.
+```bash
+mvn exec:java -Dexec.mainClass=io.opencensus.quickstart.Repl
+```
+You should see something like this:
+
+
+Now, in preparation for collecting metrics, let's abstract some of the core functionality of `main()` into a suite of helper functions:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+public class Repl {
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+```
+
+#### Enable Metrics
+
+##### Import Packages
+To enable metrics, we’ll declare the OpenCensus dependencies in your `pom.xml` file:
+
+```xml
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+</dependencies>
+```
+
+The full `pom.xml` now looks like this:
+
+```xml
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-compiler-plugin</artifactId>
+                    <version>3.7.0</version>
+                    <configuration>
+                        <source>1.8</source>
+                        <target>1.8</target>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.codehaus.mojo</groupId>
+                    <artifactId>appassembler-maven-plugin</artifactId>
+                    <version>1.10</version>
+                    <configuration>
+                        <programs>
+                            <program>
+                                <id>Repl</id>
+                                <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                            </program>
+                        </programs>
+                    </configuration>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+    </build>
+</project>
+```
+
+Now add the import statements to your `Repl.java`:
+```java
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+
+public class Repl {
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+```
+
+##### Create Metrics
+First, we will create the variables needed to later record our metrics.
+
+```java
+// The latency in milliseconds
+private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+// Counts the number of lines read in from standard input.
+private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+// Counts the number of non EOF(end-of-file) errors.
+private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+// Counts/groups the lengths of lines read in.
+private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+private static final Tagger TAGGER = Tags.getTagger();
+private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+}
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+```
+
+##### Create Tags
+Now we will create the variable that we'll later use to attach extra text metadata to our metrics.
+
+```java
+// The tag "method"
+private static final TagKey KeyMethod = TagKey.create("method");
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+```
+
+We will later use this tag, called `KeyMethod`, to record which method is being invoked. In our scenario, we will only use it to record that the "repl" method is the source of our data.
+
+Again, this is arbitrary and purely up to the user. For example, if we wanted to track what operating system a user is using, we could do so like this:
+```java
+private static final TagKey OSKey = TagKey.create("operating_system");
+```
+
+Later, when we use OSKey, we will be given an opportunity to enter values such as "windows" or "mac".
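+
+A tag is just a piece of key-value metadata attached to a measurement at the moment it is recorded, so that the backend can slice aggregates by it. A toy plain-Java model of that idea (this is an illustration, not the OpenCensus API; all names here are hypothetical):
+
+```java
+import java.util.HashMap;
+import java.util.Map;
+
+public class TagSketch {
+    // A measurement value together with the tags attached when it was recorded.
+    static class Measurement {
+        final long value;
+        final Map<String, String> tags;
+
+        Measurement(long value, Map<String, String> tags) {
+            this.value = value;
+            this.tags = tags;
+        }
+    }
+
+    // Record a value under a single key-value tag, much as recordTaggedStat will.
+    static Measurement record(String tagKey, String tagValue, long value) {
+        Map<String, String> tags = new HashMap<String, String>();
+        tags.put(tagKey, tagValue);
+        return new Measurement(value, tags);
+    }
+
+    public static void main(String[] args) {
+        Measurement m = record("operating_system", "mac", 1);
+        // The backend could now group or count measurements by "operating_system".
+        System.out.println(m.tags.get("operating_system"));
+    }
+}
+```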
+
+We will now create helper functions to assist us with recording Tagged Stats.
+
+```java
+private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+}
+
+private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+}
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+```
+
+##### Recording Metrics
+Finally, we'll hook our stat recorders into `main`, `processLine`, and `readEvaluateProcess`:
+
+```java
+public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+}
+
+private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = (new Double(totalTimeNs))/1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+}
+
+private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch(Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+}
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = (new Double(totalTimeNs))/1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch(Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+}
+```
+
+#### Enable Views
+In order to examine these stats, we’ll need to export them to the backend of our choice for processing and aggregation.
+
+To do this, we need to define how the backend will process and aggregate those metrics, so we define Views to categorically describe how we'll examine the measures.
+
+
+##### Import Packages
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.View;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+import io.opencensus.tags.TagKey;
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.View;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = (new Double(totalTimeNs))/1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch(Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+}
+```
+
+##### Create Views
+We now determine how our metrics will be organized by creating `Views`.
+
+```java
+private static void registerAllViews() {
+ // Defining the distribution aggregations
+ Aggregation latencyDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ 0.0, 25.0, 50.0, 75.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 2000.0, 4000.0, 6000.0)
+ ));
+
+ Aggregation lengthsDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0B, >=5B, >=10B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ 0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0)
+ ));
+
+ // Define the count aggregation
+ Aggregation countAggregation = Aggregation.Count.create();
+
+ // No tag keys for these views
+ List<TagKey> noKeys = new ArrayList<TagKey>();
+
+ // Define the views
+ View[] views = new View[]{
+ View.create(Name.create("demo/latency"), "The distribution of latencies", MLatencyMs, latencyDistribution, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/lines_in"), "The number of lines read in from standard input", MLinesIn, countAggregation, noKeys),
+ View.create(Name.create("demo/errors"), "The number of errors encountered", MErrors, countAggregation, noKeys),
+ View.create(Name.create("demo/line_length"), "The distribution of line lengths", MLineLengths, lengthsDistribution, noKeys)
+ };
+
+ // Create the view manager
+ ViewManager vmgr = Stats.getViewManager();
+
+ // Then finally register the views
+ for (View view : views)
+ vmgr.registerView(view);
+}
+```
+
+The full `Repl.java` now looks like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.View;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = (new Double(totalTimeNs))/1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch(Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+
+ private static void registerAllViews() {
+ // Defining the distribution aggregations
+ Aggregation latencyDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ 0.0, 25.0, 50.0, 75.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 2000.0, 4000.0, 6000.0)
+ ));
+
+ Aggregation lengthsDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0B, >=5B, >=10B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ 0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0)
+ ));
+
+ // Define the count aggregation
+ Aggregation countAggregation = Aggregation.Count.create();
+
+ // No tag keys for these views
+ List<TagKey> noKeys = new ArrayList<TagKey>();
+
+ // Define the views
+ View[] views = new View[]{
+ View.create(Name.create("demo/latency"), "The distribution of latencies", MLatencyMs, latencyDistribution, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/lines_in"), "The number of lines read in from standard input", MLinesIn, countAggregation, noKeys),
+ View.create(Name.create("demo/errors"), "The number of errors encountered", MErrors, countAggregation, noKeys),
+ View.create(Name.create("demo/line_length"), "The distribution of line lengths", MLineLengths, lengthsDistribution, noKeys)
+ };
+
+ // Create the view manager
+ ViewManager vmgr = Stats.getViewManager();
+
+ // Then finally register the views
+ for (View view : views)
+ vmgr.registerView(view);
+ }
+}
+```
+
+##### Register Views
+We will create a function called `setupOpenCensusAndStackdriverExporter` and call it from our main function:
+
+{{}}
+{{}}
+public static void main(String ...args) {
+ // Step 1. Enable OpenCensus Metrics.
+ try {
+ setupOpenCensusAndStackdriverExporter();
+ } catch (IOException e) {
+ System.err.println("Failed to create and register OpenCensus Stackdriver Stats exporter "+ e);
+ return;
+ }
+
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+}
+
+private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ // Firstly register the views
+ registerAllViews();
+}
+{{}}
+
+{{}}
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.View;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Enable OpenCensus Metrics.
+ try {
+ setupOpenCensusAndStackdriverExporter();
+ } catch (IOException e) {
+ System.err.println("Failed to create and register OpenCensus Stackdriver Stats exporter "+ e);
+ return;
+ }
+
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = totalTimeNs / 1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+
+ private static void registerAllViews() {
+ // Defining the distribution aggregations
+ Aggregation latencyDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ 0.0, 25.0, 50.0, 75.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 2000.0, 4000.0, 6000.0)
+ ));
+
+ Aggregation lengthsDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0B, >=5B, >=10B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ 0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0)
+ ));
+
+ // Define the count aggregation
+ Aggregation countAggregation = Aggregation.Count.create();
+
+ // No tag keys for these views
+ List<TagKey> noKeys = new ArrayList<TagKey>();
+
+ // Define the views
+ View[] views = new View[]{
+ View.create(Name.create("demo/latency"), "The distribution of latencies", MLatencyMs, latencyDistribution, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/lines_in"), "The number of lines read in from standard input", MLinesIn, countAggregation, noKeys),
+ View.create(Name.create("demo/errors"), "The number of errors encountered", MErrors, countAggregation, noKeys),
+ View.create(Name.create("demo/line_lengths"), "The distribution of line lengths", MLineLengths, lengthsDistribution, noKeys)
+ };
+
+ // Create the view manager
+ ViewManager vmgr = Stats.getViewManager();
+
+ // Then finally register the views
+ for (View view : views)
+ vmgr.registerView(view);
+ }
+
+ private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ // Firstly register the views
+ registerAllViews();
+ }
+}
+{{}}
+{{}}
+
+
+
+#### Exporting to Stackdriver
+
+##### Import Packages
+
+`pom.xml`
+{{}}
+{{}}
+<dependency>
+    <groupId>io.opencensus</groupId>
+    <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
+    <version>${opencensus.version}</version>
+</dependency>
+{{}}
+
+{{}}
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-compiler-plugin</artifactId>
+                    <version>3.7.0</version>
+                    <configuration>
+                        <source>1.8</source>
+                        <target>1.8</target>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.codehaus.mojo</groupId>
+                    <artifactId>appassembler-maven-plugin</artifactId>
+                    <version>1.10</version>
+                    <configuration>
+                        <programs>
+                            <program>
+                                <id>Repl</id>
+                                <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                            </program>
+                        </programs>
+                    </configuration>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+    </build>
+</project>
+{{}}
+{{}}
+
+`Repl.java`
+{{}}
+{{}}
+import java.io.IOException;
+
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+{{}}
+
+{{}}
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.View;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Enable OpenCensus Metrics.
+ try {
+ setupOpenCensusAndStackdriverExporter();
+ } catch (IOException e) {
+ System.err.println("Failed to create and register OpenCensus Stackdriver Stats exporter "+ e);
+ return;
+ }
+
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = totalTimeNs / 1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ try {
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+
+ private static void registerAllViews() {
+ // Defining the distribution aggregations
+ Aggregation latencyDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ 0.0, 25.0, 50.0, 75.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 2000.0, 4000.0, 6000.0)
+ ));
+
+ Aggregation lengthsDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0B, >=5B, >=10B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ 0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0)
+ ));
+
+ // Define the count aggregation
+ Aggregation countAggregation = Aggregation.Count.create();
+
+ // No tag keys for these views
+ List<TagKey> noKeys = new ArrayList<TagKey>();
+
+ // Define the views
+ View[] views = new View[]{
+ View.create(Name.create("demo/latency"), "The distribution of latencies", MLatencyMs, latencyDistribution, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/lines_in"), "The number of lines read in from standard input", MLinesIn, countAggregation, noKeys),
+ View.create(Name.create("demo/errors"), "The number of errors encountered", MErrors, countAggregation, noKeys),
+ View.create(Name.create("demo/line_lengths"), "The distribution of line lengths", MLineLengths, lengthsDistribution, noKeys)
+ };
+
+ // Create the view manager
+ ViewManager vmgr = Stats.getViewManager();
+
+ // Then finally register the views
+ for (View view : views)
+ vmgr.registerView(view);
+ }
+
+ private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ // Firstly register the views
+ registerAllViews();
+ }
+}
+{{}}
+{{}}
+
+##### Export Views
+We will further expand upon `setupOpenCensusAndStackdriverExporter`:
+
+```java
+private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ // Firstly register the views
+ registerAllViews();
+
+ String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+ StackdriverStatsExporter.createAndRegister(
+ StackdriverStatsConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+}
+```
+
+Let's create the helper function `envOrAlternative` to assist with getting the Google Cloud Project ID:
+
+```java
+private static String envOrAlternative(String key, String ...alternatives) {
+ String value = System.getenv().get(key);
+ if (value != null && !value.isEmpty())
+ return value;
+
+ // Otherwise now look for the alternatives.
+ for (String alternative : alternatives) {
+ if (alternative != null && !alternative.isEmpty()) {
+ value = alternative;
+ break;
+ }
+ }
+
+ return value;
+}
+```
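+
+To sanity-check the fallback order, here is a self-contained sketch; the class name and the unset variable name are ours, for illustration only:
+
+```java
+public class EnvOrAlternativeDemo {
+    // Same idea as the helper above: prefer the environment variable when it is
+    // set and non-empty, otherwise return the first non-empty alternative.
+    static String envOrAlternative(String key, String ...alternatives) {
+        String value = System.getenv().get(key);
+        if (value != null && !value.isEmpty())
+            return value;
+
+        for (String alternative : alternatives) {
+            if (alternative != null && !alternative.isEmpty()) {
+                value = alternative;
+                break;
+            }
+        }
+        return value;
+    }
+
+    public static void main(String ...args) {
+        // This variable is almost certainly unset, so the alternative wins.
+        System.out.println(envOrAlternative("OC_DEMO_UNSET_VAR", "fallback-project-id"));
+    }
+}
+```
+
+Because the first non-empty alternative wins, pass the fallbacks in order of preference.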
+
+Here is the final state of the code:
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import io.opencensus.common.Scope;
+import io.opencensus.stats.Aggregation;
+import io.opencensus.stats.Aggregation.Distribution;
+import io.opencensus.stats.BucketBoundaries;
+import io.opencensus.stats.Stats;
+import io.opencensus.stats.Measure;
+import io.opencensus.stats.Measure.MeasureLong;
+import io.opencensus.stats.Measure.MeasureDouble;
+import io.opencensus.stats.StatsRecorder;
+import io.opencensus.stats.View;
+import io.opencensus.tags.Tags;
+import io.opencensus.tags.Tagger;
+import io.opencensus.tags.TagContext;
+import io.opencensus.tags.TagContextBuilder;
+import io.opencensus.tags.TagKey;
+import io.opencensus.tags.TagValue;
+import io.opencensus.stats.View.Name;
+import io.opencensus.stats.ViewManager;
+import io.opencensus.stats.View.AggregationWindow.Cumulative;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+
+public class Repl {
+ // The latency in milliseconds
+ private static final MeasureDouble MLatencyMs = MeasureDouble.create("repl/latency", "The latency in milliseconds per REPL loop", "ms");
+
+ // Counts the number of lines read in from standard input.
+ private static final MeasureLong MLinesIn = MeasureLong.create("repl/lines_in", "The number of lines read in", "1");
+
+ // Counts the number of non EOF(end-of-file) errors.
+ private static final MeasureLong MErrors = MeasureLong.create("repl/errors", "The number of errors encountered", "1");
+
+ // Counts/groups the lengths of lines read in.
+ private static final MeasureLong MLineLengths = MeasureLong.create("repl/line_lengths", "The distribution of line lengths", "By");
+
+ // The tag "method"
+ private static final TagKey KeyMethod = TagKey.create("method");
+
+ private static final Tagger TAGGER = Tags.getTagger();
+ private static final StatsRecorder STATSRECORDER = Stats.getStatsRecorder();
+
+ public static void main(String ...args) {
+ // Step 1. Enable OpenCensus Metrics.
+ try {
+ setupOpenCensusAndStackdriverExporter();
+ } catch (IOException e) {
+ System.err.println("Failed to create and register OpenCensus Stackdriver Stats exporter "+ e);
+ return;
+ }
+
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("EOF bye "+ e);
+ return;
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "repl", MErrors, new Long(1));
+ }
+ }
+ }
+
+ private static void recordStat(MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureLong ml, Long n) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(ml, n).record();
+ }
+ }
+
+ private static void recordTaggedStat(TagKey key, String value, MeasureDouble md, Double d) {
+ TagContext tctx = TAGGER.emptyBuilder().put(key, TagValue.create(value)).build();
+ try (Scope ss = TAGGER.withTagContext(tctx)) {
+ STATSRECORDER.newMeasureMap().put(md, d).record();
+ }
+ }
+
+ private static String processLine(String line) {
+ long startTimeNs = System.nanoTime();
+
+ try {
+ return line.toUpperCase();
+ } catch (Exception e) {
+ recordTaggedStat(KeyMethod, "processLine", MErrors, new Long(1));
+ return "";
+ } finally {
+ long totalTimeNs = System.nanoTime() - startTimeNs;
+ double timespentMs = totalTimeNs / 1e6;
+ recordTaggedStat(KeyMethod, "processLine", MLatencyMs, timespentMs);
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+
+ String line = in.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ if (line != null && line.length() > 0) {
+ recordStat(MLinesIn, new Long(1));
+ recordStat(MLineLengths, new Long(line.length()));
+ }
+ }
+
+ private static void registerAllViews() {
+ // Defining the distribution aggregations
+ Aggregation latencyDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ 0.0, 25.0, 50.0, 75.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 2000.0, 4000.0, 6000.0)
+ ));
+
+ Aggregation lengthsDistribution = Distribution.create(BucketBoundaries.create(
+ Arrays.asList(
+ // [>=0B, >=5B, >=10B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ 0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0)
+ ));
+
+ // Define the count aggregation
+ Aggregation countAggregation = Aggregation.Count.create();
+
+ // No tag keys for these views
+ List<TagKey> noKeys = new ArrayList<TagKey>();
+
+ // Define the views
+ View[] views = new View[]{
+ View.create(Name.create("demo/latency"), "The distribution of latencies", MLatencyMs, latencyDistribution, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/lines_in"), "The number of lines read in from standard input", MLinesIn, countAggregation, noKeys),
+ View.create(Name.create("demo/errors"), "The number of errors encountered", MErrors, countAggregation, Collections.singletonList(KeyMethod)),
+ View.create(Name.create("demo/line_lengths"), "The distribution of line lengths", MLineLengths, lengthsDistribution, noKeys)
+ };
+
+ // Create the view manager
+ ViewManager vmgr = Stats.getViewManager();
+
+ // Then finally register the views
+ for (View view : views)
+ vmgr.registerView(view);
+ }
+
+ private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ // Firstly register the views
+ registerAllViews();
+
+ String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+ StackdriverStatsExporter.createAndRegister(
+ StackdriverStatsConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+ }
+
+ private static String envOrAlternative(String key, String ...alternatives) {
+ String value = System.getenv().get(key);
+ if (value != null && !value.isEmpty())
+ return value;
+
+ // Otherwise now look for the alternatives.
+ for (String alternative : alternatives) {
+ if (alternative != null && !alternative.isEmpty()) {
+ value = alternative;
+ break;
+ }
+ }
+
+ return value;
+ }
+}
+```
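+
+With the exporter wired in, you can run the REPL the same way as before, supplying the project ID through the environment. Here `your-gcp-project-id` is a placeholder for your actual Google Cloud project ID:
+
+```bash
+GCP_PROJECT_ID=your-gcp-project-id mvn exec:java -Dexec.mainClass=io.opencensus.quickstart.Repl
+```
+
+This assumes Application Default Credentials are already configured, as noted at the top of this guide.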
+
+#### Viewing your Metrics on Stackdriver
+With the above you should now be able to navigate to the [Google Cloud Platform console](https://app.google.stackdriver.com/metrics-explorer), select your project, and view the metrics.
+
+In the query box to find metrics, type `quickstart` as a prefix:
+
+
+
+On selecting any of the metrics, e.g. `quickstart/demo/lines_in`, we’ll get...
+
+
+
+Let’s examine the latency buckets:
+
+
+
+Checking out the Stacked Area display of the latency, we can see that the 99th percentile latency was 24.75ms. And for `line_lengths`:
+
+
diff --git a/content/quickstart/java/tracing.md b/content/quickstart/java/tracing.md
new file mode 100644
index 00000000..0d5cc391
--- /dev/null
+++ b/content/quickstart/java/tracing.md
@@ -0,0 +1,1100 @@
+---
+title: "Tracing"
+date: 2018-07-16T14:29:21-07:00
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#requirements)
+- [Installation](#installation)
+- [Getting started](#getting-started)
+- [Enable Tracing](#enable-tracing)
+ - [Import Packages](#import-tracing-packages)
+ - [Instrumentation](#instrument-tracing)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Traces](#export-traces)
+ - [Create Annotations](#create-annotations)
+- [Viewing your Traces on Stackdriver](#viewing-your-traces-on-stackdriver)
+
+In this quickstart, we’ll glean insights into a segment of code and learn how to:
+
+1. Trace the code using [OpenCensus Tracing](/core-concepts/tracing)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View traces on the backend of our choice
+
+#### Requirements
+- Java 8+
+- Google Cloud Platform account and project
+- Google Stackdriver Tracing enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+```bash
+mvn archetype:generate \
+ -DgroupId=io.opencensus.quickstart \
+ -DartifactId=repl-app \
+ -DarchetypeArtifactId=maven-archetype-quickstart \
+ -DinteractiveMode=false
+
+cd repl-app/src/main/java/io/opencensus/quickstart
+
+mv App.java Repl.java
+```
+Put this in your newly generated `pom.xml` file:
+
+```xml
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-compiler-plugin</artifactId>
+                    <version>3.7.0</version>
+                    <configuration>
+                        <source>1.8</source>
+                        <target>1.8</target>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.codehaus.mojo</groupId>
+                    <artifactId>appassembler-maven-plugin</artifactId>
+                    <version>1.10</version>
+                    <configuration>
+                        <programs>
+                            <program>
+                                <id>Repl</id>
+                                <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                            </program>
+                        </programs>
+                    </configuration>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+    </build>
+</project>
+```
+
+Put this in `src/main/java/io/opencensus/quickstart/Repl.java`:
+
+{{}}
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+public class Repl {
+ public static void main(String ...args) {
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ System.out.print("> ");
+ System.out.flush();
+ String line = stdin.readLine();
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+}
+{{}}
+
+Install required dependencies:
+```bash
+mvn install
+```
+
+#### Getting Started
+Let's first run the application and see what we have.
+```bash
+mvn exec:java -Dexec.mainClass=io.opencensus.quickstart.Repl
+```
+We have ourselves a lower-to-UPPERCASE REPL. You should see something like this:
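+
+For example, a short session might look like this (output illustrative):
+
+```
+> hello opencensus
+< HELLO OPENCENSUS
+
+> tracing
+< TRACING
+```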
+
+
+Now, in preparation for tracing, let's abstract some of the core functionality in `main()` into a suite of helper functions:
+
+{{}}
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+public class Repl {
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static String readLine(BufferedReader in) {
+ String line = "";
+
+ try {
+ line = in.readLine();
+ } catch (Exception e) {
+ System.err.println("Failed to read line "+ e);
+ }
+
+ // Returning here, rather than from a finally block, avoids masking exceptions.
+ return line;
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = readLine(in);
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+{{}}
+
+#### Enable Tracing
+
+##### Import Packages
+To enable tracing, we’ll declare the dependencies in your `pom.xml` file:
+
+{{}}
+{{}}
+<dependency>
+    <groupId>io.opencensus</groupId>
+    <artifactId>opencensus-api</artifactId>
+    <version>${opencensus.version}</version>
+</dependency>
+
+<dependency>
+    <groupId>io.opencensus</groupId>
+    <artifactId>opencensus-impl</artifactId>
+    <version>${opencensus.version}</version>
+</dependency>
+{{}}
+
+{{}}
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-compiler-plugin</artifactId>
+                    <version>3.7.0</version>
+                    <configuration>
+                        <source>1.8</source>
+                        <target>1.8</target>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.codehaus.mojo</groupId>
+                    <artifactId>appassembler-maven-plugin</artifactId>
+                    <version>1.10</version>
+                    <configuration>
+                        <programs>
+                            <program>
+                                <id>Repl</id>
+                                <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                            </program>
+                        </programs>
+                    </configuration>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+    </build>
+</project>
+{{}}
+{{}}
+
+Now add the import statements to your `Repl.java`:
+
+{{}}
+{{}}
+import io.opencensus.common.Scope;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+{{}}
+
+{{}}
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+public class Repl {
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ return line.toUpperCase();
+ }
+
+ private static String readLine(BufferedReader in) {
+ String line = "";
+
+ try {
+ line = in.readLine();
+ } catch (Exception e) {
+ System.err.println("Failed to read line "+ e);
+ }
+
+ // Returning here, rather than from a finally block, avoids masking exceptions.
+ return line;
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ System.out.print("> ");
+ System.out.flush();
+ String line = readLine(in);
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+}
+{{}}
+{{}}
+
+##### Instrumentation
+We will begin by creating a private static `Tracer` as a property of our Repl class.
+
+```java
+private static final Tracer TRACER = Tracing.getTracer();
+```
+
+We will be tracing the execution as it flows through `readEvaluateProcess`, `readLine`, and finally `processLine`.
+
+To do this, we will create a [span](/core-concepts/tracing/#spans).
+
+You can create a span by inserting the following line in each of the three functions:
+```java
+Scope ss = TRACER.spanBuilder("repl").startScopedSpan();
+```
+
+Here is our updated state of `Repl.java`:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+public class Repl {
+ private static final Tracer TRACER = Tracing.getTracer();
+
+ public static void main(String ...args) {
+ // Step 1. Our OpenCensus initialization will eventually go here
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+ return line.toUpperCase();
+ }
+ }
+
+ private static String readLine(BufferedReader in) {
+ Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+ String line = "";
+
+ try {
+ line = in.readLine();
+ } catch (Exception e) {
+ Span span = TRACER.getCurrentSpan();
+ span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+ } finally {
+ ss.close();
+ }
+
+ return line;
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+ System.out.print("> ");
+ System.out.flush();
+ String line = readLine(in);
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+ }
+}
+```
+
+#### Exporting to Stackdriver
+
+##### Import Packages
+To turn on Stackdriver Tracing, we’ll need to declare the Stackdriver dependency in your `pom.xml`:
+
+{{}}
+{{}}
+<dependency>
+    <groupId>io.opencensus</groupId>
+    <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
+    <version>${opencensus.version}</version>
+</dependency>
+{{}}
+
+{{}}
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>io.opencensus.quickstart</groupId>
+    <artifactId>quickstart</artifactId>
+    <packaging>jar</packaging>
+    <version>1.0-SNAPSHOT</version>
+    <name>quickstart</name>
+    <url>http://maven.apache.org</url>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <opencensus.version>0.14.0</opencensus.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-api</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-impl</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>io.opencensus</groupId>
+            <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
+            <version>${opencensus.version}</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>1.5.0.Final</version>
+            </extension>
+        </extensions>
+
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-compiler-plugin</artifactId>
+                    <version>3.7.0</version>
+                    <configuration>
+                        <source>1.8</source>
+                        <target>1.8</target>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.codehaus.mojo</groupId>
+                    <artifactId>appassembler-maven-plugin</artifactId>
+                    <version>1.10</version>
+                    <configuration>
+                        <programs>
+                            <program>
+                                <id>Repl</id>
+                                <mainClass>io.opencensus.quickstart.Repl</mainClass>
+                            </program>
+                        </programs>
+                    </configuration>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+    </build>
+</project>
+{{}}
+{{}}
+
+Now add the import statements to your `Repl.java`:
+
+```java
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+```
+
+Your `Repl.java` should now look like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class Repl {
+  private static final Tracer TRACER = Tracing.getTracer();
+
+  public static void main(String ...args) {
+    // Step 1. Our OpenCensus initialization will eventually go here
+
+    // Step 2. The normal REPL.
+    BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+    while (true) {
+      try {
+        readEvaluateProcess(stdin);
+      } catch (IOException e) {
+        System.err.println("Exception " + e);
+      }
+    }
+  }
+
+  private static String processLine(String line) {
+    try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+      return line.toUpperCase();
+    }
+  }
+
+  private static String readLine(BufferedReader in) {
+    Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+    String line = "";
+
+    try {
+      line = in.readLine();
+    } catch (Exception e) {
+      Span span = TRACER.getCurrentSpan();
+      span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+    } finally {
+      ss.close();
+      return line;
+    }
+  }
+
+  private static void readEvaluateProcess(BufferedReader in) throws IOException {
+    try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+      System.out.print("> ");
+      System.out.flush();
+      String line = readLine(in);
+      String processed = processLine(line);
+      System.out.println("< " + processed + "\n");
+    }
+  }
+}
+```
+
+##### Export Traces
+
+Now it is time to implement `Step 1: OpenCensus Initialization`!
+
+We will create a function called `setupOpenCensusAndStackdriverExporter` and call it from our `main` function:
+
+```java
+public static void main(String ...args) {
+  // Step 1. Enable OpenCensus Tracing.
+  try {
+    setupOpenCensusAndStackdriverExporter();
+  } catch (IOException e) {
+    System.err.println("Failed to create and register OpenCensus Stackdriver Trace exporter " + e);
+    return;
+  }
+
+  // ...
+}
+```
+
+Your `Repl.java` should now look like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class Repl {
+  private static final Tracer TRACER = Tracing.getTracer();
+
+  public static void main(String ...args) {
+    // Step 1. Enable OpenCensus Tracing.
+    try {
+      setupOpenCensusAndStackdriverExporter();
+    } catch (IOException e) {
+      System.err.println("Failed to create and register OpenCensus Stackdriver Trace exporter " + e);
+      return;
+    }
+
+    // Step 2. The normal REPL.
+    BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+    while (true) {
+      try {
+        readEvaluateProcess(stdin);
+      } catch (IOException e) {
+        System.err.println("Exception " + e);
+      }
+    }
+  }
+
+  private static String processLine(String line) {
+    try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+      return line.toUpperCase();
+    }
+  }
+
+  private static String readLine(BufferedReader in) {
+    Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+    String line = "";
+
+    try {
+      line = in.readLine();
+    } catch (Exception e) {
+      Span span = TRACER.getCurrentSpan();
+      span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+    } finally {
+      ss.close();
+      return line;
+    }
+  }
+
+  private static void readEvaluateProcess(BufferedReader in) throws IOException {
+    try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+      System.out.print("> ");
+      System.out.flush();
+      String line = readLine(in);
+      String processed = processLine(line);
+      System.out.println("< " + processed + "\n");
+    }
+  }
+}
+```
+
+We will do three things in our `setupOpenCensusAndStackdriverExporter` function:
+
+1. Set our [sampling rate](/core-concepts/tracing/#sampling)
+```java
+TraceConfig traceConfig = Tracing.getTraceConfig();
+// For demo purposes, let's always sample.
+traceConfig.updateActiveTraceParams(
+ traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+```
+
+2. Retrieve our Google Cloud Project ID
+```java
+// Implementation will come later
+String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+```
+
+3. Export our Traces to Stackdriver
+```java
+StackdriverTraceExporter.createAndRegister(
+ StackdriverTraceConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+```
+
+The function ends up looking like this:
+
+```java
+private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+  TraceConfig traceConfig = Tracing.getTraceConfig();
+  // For demo purposes, let's always sample.
+  traceConfig.updateActiveTraceParams(
+      traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+
+  String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+  StackdriverTraceExporter.createAndRegister(
+      StackdriverTraceConfiguration.builder()
+          .setProjectId(gcpProjectId)
+          .build());
+}
+```
+
+Your `Repl.java` should now look like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class Repl {
+  private static final Tracer TRACER = Tracing.getTracer();
+
+  public static void main(String ...args) {
+    // Step 1. Enable OpenCensus Tracing.
+    try {
+      setupOpenCensusAndStackdriverExporter();
+    } catch (IOException e) {
+      System.err.println("Failed to create and register OpenCensus Stackdriver Trace exporter " + e);
+      return;
+    }
+
+    // Step 2. The normal REPL.
+    BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+    while (true) {
+      try {
+        readEvaluateProcess(stdin);
+      } catch (IOException e) {
+        System.err.println("Exception " + e);
+      }
+    }
+  }
+
+  private static String processLine(String line) {
+    try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+      return line.toUpperCase();
+    }
+  }
+
+  private static String readLine(BufferedReader in) {
+    Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+    String line = "";
+
+    try {
+      line = in.readLine();
+    } catch (Exception e) {
+      Span span = TRACER.getCurrentSpan();
+      span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+    } finally {
+      ss.close();
+      return line;
+    }
+  }
+
+  private static void readEvaluateProcess(BufferedReader in) throws IOException {
+    try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+      System.out.print("> ");
+      System.out.flush();
+      String line = readLine(in);
+      String processed = processLine(line);
+      System.out.println("< " + processed + "\n");
+    }
+  }
+
+  private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+    TraceConfig traceConfig = Tracing.getTraceConfig();
+    // For demo purposes, let's always sample.
+    traceConfig.updateActiveTraceParams(
+        traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+
+    String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+    StackdriverTraceExporter.createAndRegister(
+        StackdriverTraceConfiguration.builder()
+            .setProjectId(gcpProjectId)
+            .build());
+  }
+}
+```
+
+Now we will handle our implementation of `envOrAlternative`:
+
+```java
+private static String envOrAlternative(String key, String ...alternatives) {
+  String value = System.getenv().get(key);
+  if (value != null && !value.isEmpty())
+    return value;
+
+  // Otherwise now look for the alternatives.
+  for (String alternative : alternatives) {
+    if (alternative != null && !alternative.isEmpty()) {
+      value = alternative;
+      break;
+    }
+  }
+
+  return value;
+}
+```
+
+Your `Repl.java` should now look like this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class Repl {
+  private static final Tracer TRACER = Tracing.getTracer();
+
+  public static void main(String ...args) {
+    // Step 1. Enable OpenCensus Tracing.
+    try {
+      setupOpenCensusAndStackdriverExporter();
+    } catch (IOException e) {
+      System.err.println("Failed to create and register OpenCensus Stackdriver Trace exporter " + e);
+      return;
+    }
+
+    // Step 2. The normal REPL.
+    BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+    while (true) {
+      try {
+        readEvaluateProcess(stdin);
+      } catch (IOException e) {
+        System.err.println("Exception " + e);
+      }
+    }
+  }
+
+  private static String processLine(String line) {
+    try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+      return line.toUpperCase();
+    }
+  }
+
+  private static String readLine(BufferedReader in) {
+    Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+    String line = "";
+
+    try {
+      line = in.readLine();
+    } catch (Exception e) {
+      Span span = TRACER.getCurrentSpan();
+      span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+    } finally {
+      ss.close();
+      return line;
+    }
+  }
+
+  private static void readEvaluateProcess(BufferedReader in) throws IOException {
+    try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+      System.out.print("> ");
+      System.out.flush();
+      String line = readLine(in);
+      String processed = processLine(line);
+      System.out.println("< " + processed + "\n");
+    }
+  }
+
+  private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+    TraceConfig traceConfig = Tracing.getTraceConfig();
+    // For demo purposes, let's always sample.
+    traceConfig.updateActiveTraceParams(
+        traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+
+    String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+    StackdriverTraceExporter.createAndRegister(
+        StackdriverTraceConfiguration.builder()
+            .setProjectId(gcpProjectId)
+            .build());
+  }
+
+  private static String envOrAlternative(String key, String ...alternatives) {
+    String value = System.getenv().get(key);
+    if (value != null && !value.isEmpty())
+      return value;
+
+    // Otherwise now look for the alternatives.
+    for (String alternative : alternatives) {
+      if (alternative != null && !alternative.isEmpty()) {
+        value = alternative;
+        break;
+      }
+    }
+
+    return value;
+  }
+}
+```
+
+
+##### Create Annotations
+When looking at our traces on a backend (such as Stackdriver), we can add metadata to our traces to increase our post-mortem insight.
+
+Let's record the length of each requested string so that it is available to view when we are looking at our traces.
+
+To do this, we'll dive in to `readEvaluateProcess`.
+
+Between `String line = readLine(in)` and `String processed = processLine(line)`, add this:
+
+```java
+// Annotate the span to indicate we are invoking processLine next.
+Map<String, AttributeValue> attributes = new HashMap<String, AttributeValue>();
+attributes.put("len", AttributeValue.longAttributeValue(line.length()));
+attributes.put("use", AttributeValue.stringAttributeValue("repl"));
+Span span = TRACER.getCurrentSpan();
+span.addAnnotation("Invoking processLine", attributes);
+```
+
+The final state of `Repl.java` should be this:
+
+```java
+package io.opencensus.quickstart;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opencensus.common.Scope;
+import io.opencensus.trace.AttributeValue;
+import io.opencensus.trace.config.TraceConfig;
+import io.opencensus.trace.samplers.Samplers;
+import io.opencensus.trace.Span;
+import io.opencensus.trace.Status;
+import io.opencensus.trace.Tracer;
+import io.opencensus.trace.Tracing;
+
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class Repl {
+ private static final Tracer TRACER = Tracing.getTracer();
+
+ public static void main(String ...args) {
+ // Step 1. Enable OpenCensus Tracing.
+ try {
+ setupOpenCensusAndStackdriverExporter();
+ } catch (IOException e) {
+ System.err.println("Failed to create and register OpenCensus Stackdriver Trace exporter "+ e);
+ return;
+ }
+
+ // Step 2. The normal REPL.
+ BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
+
+ while (true) {
+ try {
+ readEvaluateProcess(stdin);
+ } catch (IOException e) {
+ System.err.println("Exception "+ e);
+ }
+ }
+ }
+
+ private static String processLine(String line) {
+ try (Scope ss = TRACER.spanBuilder("processLine").startScopedSpan()) {
+ return line.toUpperCase();
+ }
+ }
+
+ private static String readLine(BufferedReader in) {
+ Scope ss = TRACER.spanBuilder("readLine").startScopedSpan();
+
+ String line = "";
+
+ try {
+ line = in.readLine();
+ } catch (Exception e) {
+ Span span = TRACER.getCurrentSpan();
+ span.setStatus(Status.INTERNAL.withDescription(e.toString()));
+ } finally {
+ ss.close();
+ return line;
+ }
+ }
+
+ private static void readEvaluateProcess(BufferedReader in) throws IOException {
+ try (Scope ss = TRACER.spanBuilder("repl").startScopedSpan()) {
+ System.out.print("> ");
+ System.out.flush();
+ String line = readLine(in);
+
+ // Annotate the span to indicate we are invoking processLine next.
+      Map<String, AttributeValue> attributes = new HashMap<String, AttributeValue>();
+ attributes.put("len", AttributeValue.longAttributeValue(line.length()));
+ attributes.put("use", AttributeValue.stringAttributeValue("repl"));
+ Span span = TRACER.getCurrentSpan();
+ span.addAnnotation("Invoking processLine", attributes);
+
+ String processed = processLine(line);
+ System.out.println("< " + processed + "\n");
+ }
+ }
+
+ private static void setupOpenCensusAndStackdriverExporter() throws IOException {
+ TraceConfig traceConfig = Tracing.getTraceConfig();
+    // For demo purposes, let's always sample.
+ traceConfig.updateActiveTraceParams(
+ traceConfig.getActiveTraceParams().toBuilder().setSampler(Samplers.alwaysSample()).build());
+
+ String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+ StackdriverTraceExporter.createAndRegister(
+ StackdriverTraceConfiguration.builder()
+ .setProjectId(gcpProjectId)
+ .build());
+ }
+
+ private static String envOrAlternative(String key, String ...alternatives) {
+ String value = System.getenv().get(key);
+    if (value != null && !value.isEmpty())
+ return value;
+
+ // Otherwise now look for the alternatives.
+ for (String alternative : alternatives) {
+      if (alternative != null && !alternative.isEmpty()) {
+ value = alternative;
+ break;
+ }
+ }
+
+ return value;
+ }
+}
+```
+
+#### Viewing your Traces on Stackdriver
+With the above you should now be able to navigate to the [Google Cloud Platform console](https://console.cloud.google.com/traces/traces), select your project, and view the traces.
+
+
+
+On clicking on one of the traces, we should be able to see the annotation whose description is `Invoking processLine`; clicking on it should show our attributes `len` and `use`.
+
+
diff --git a/content/quickstart/python/_index.md b/content/quickstart/python/_index.md
new file mode 100644
index 00000000..e329b30b
--- /dev/null
+++ b/content/quickstart/python/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Python"
+date: 2018-07-16T14:29:03-07:00
+draft: false
+class: "resized-logo"
+---
+
+
+
+In this quickstart, using OpenCensus Python, you will gain hands-on experience with:
+{{% children %}}
+
+For full API references, please take a look at:
+
+Resource|Link
+---|---
+Python API Documentation|https://census-instrumentation.github.io/opencensus-python
+Github repository|https://github.com/census-instrumentation/opencensus-python
diff --git a/content/quickstart/python/metrics.md b/content/quickstart/python/metrics.md
new file mode 100644
index 00000000..73b16d10
--- /dev/null
+++ b/content/quickstart/python/metrics.md
@@ -0,0 +1,304 @@
+---
+title: "Metrics"
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+
+This tutorial is currently incomplete, pending metrics exporter support in OpenCensus Python.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#requirements)
+- [Installation](#installation)
+- [Brief Overview](#brief-overview)
+- [Getting started](#getting-started)
+- [Enable Metrics](#enable-metrics)
+ - [Import Packages](#import-metrics-packages)
+ - [Create Metrics](#create-metrics)
+ - [Create Tags](#create-tags)
+ - [Inserting Tags](#inserting-tags)
+ - [Recording Metrics](#recording-metrics)
+- [Enable Views](#enable-views)
+ - [Import Packages](#import-views-packages)
+ - [Create Views](#create-views)
+ - [Register Views](#register-views)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Views](#export-views)
+- [Viewing your Metrics on Stackdriver](#viewing-your-metrics-on-stackdriver)
+
+In this quickstart, we’ll glean insights into a segment of code and learn how to:
+
+1. Collect metrics using [OpenCensus Metrics](/core-concepts/metrics) and [Tags](/core-concepts/tags)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View the metrics on the backend of our choice
+
+#### Requirements
+- Python 2 or later
+- Google Cloud Platform account and project
+- Google Stackdriver Tracing enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+
+OpenCensus: `pip install opencensus`
+
+#### Brief Overview
+By the end of this tutorial, we will do these four things to obtain metrics using OpenCensus:
+
+1. Create quantifiable metrics (numerical) that we will record
+2. Create [tags](/core-concepts/tags) that we will associate with our metrics
+3. Organize our metrics, similar to writing a report, into a `View`
+4. Export our views to a backend (Stackdriver in this case)
+
+
+#### Getting Started
+
+{{% notice note %}}
+Unsure how to write and execute Python code? [Click here](https://docs.python.org/).
+{{% /notice %}}
+
+We will write a simple "read-evaluate-print" (REPL) app. In it, we'll collect some metrics to observe the work going on within this code, such as:
+
+- Latency per processing loop
+- Number of lines read
+- Number of errors
+- Line lengths
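The first of these, per-loop latency, is plain wall-clock timing: measure the time around the loop body and convert it to milliseconds. A small illustrative helper (the name `timed_ms` is ours, not part of OpenCensus) shows the arithmetic the finished code uses later:

```python
import time

def timed_ms(fn, *args):
    """Run fn(*args) and return (result, elapsed time in milliseconds)."""
    start = time.time()
    result = fn(*args)
    elapsed_ms = (time.time() - start) * 1000.0  # seconds to milliseconds
    return result, elapsed_ms
```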
+
+First, create a file called `repl.py`.
+```bash
+touch repl.py
+```
+
+Next, put the following code inside of `repl.py`:
+
+```python
+#!/usr/bin/env python
+
+import sys
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        line = sys.stdin.readline()
+        print(line.upper())
+
+if __name__ == "__main__":
+    main()
+```
+
+You can run the code via `python repl.py`.
+
+#### Create and record Metrics
+
+
+##### Import Packages
+
+To enable metrics, we’ll import a number of core and OpenCensus packages:
+
+```python
+from opencensus.stats import aggregation as aggregation_module
+from opencensus.stats import measure as measure_module
+from opencensus.stats import stats as stats_module
+from opencensus.stats import view as view_module
+from opencensus.tags import tag_key as tag_key_module
+from opencensus.tags import tag_map as tag_map_module
+from opencensus.tags import tag_value as tag_value_module
+
+# Create the measures
+# The latency in milliseconds
+m_latency_ms = measure_module.MeasureFloat("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+# Counts the number of lines read in from standard input
+m_lines_in = measure_module.MeasureInt("repl/lines_in", "The number of lines read in", "1")
+
+# Counts the number of non end-of-file (EOF) errors.
+m_errors = measure_module.MeasureInt("repl/errors", "The number of errors encountered", "1")
+
+# Counts/groups the lengths of lines read in.
+m_line_lengths = measure_module.MeasureInt("repl/line_lengths", "The distribution of line lengths", "By")
+```
+
+Your `repl.py` should now look like this:
+
+```python
+#!/usr/bin/env python
+
+import sys
+import time
+
+from opencensus.stats import aggregation as aggregation_module
+from opencensus.stats import measure as measure_module
+from opencensus.stats import stats
+from opencensus.tags import tag_key as tag_key_module
+from opencensus.tags import tag_map as tag_map_module
+from opencensus.tags import tag_value as tag_value_module
+
+# Create the measures
+# The latency in milliseconds
+m_latency_ms = measure_module.MeasureFloat("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+# Counts the number of lines read in from standard input
+m_lines_in = measure_module.MeasureInt("repl/lines_in", "The number of lines read in", "1")
+
+# Counts the number of non end-of-file (EOF) errors.
+m_errors = measure_module.MeasureInt("repl/errors", "The number of errors encountered", "1")
+
+# Counts/groups the lengths of lines read in.
+m_line_lengths = measure_module.MeasureInt("repl/line_lengths", "The distribution of line lengths", "By")
+
+# The stats recorder
+stats_recorder = stats.Stats().stats_recorder
+
+# Create the tag key
+key_method = tag_key_module.TagKey("method")
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        readEvaluateProcess()
+
+def readEvaluateProcess():
+    line = sys.stdin.readline()
+    start = time.time()
+    print(line.upper())
+
+    # Now record the stats
+    # Create the measure_map into which we'll insert the measurements
+    mmap = stats_recorder.new_measurement_map()
+    end_ms = (time.time() - start) * 1000.0  # Seconds to milliseconds
+
+    # Record the latency
+    mmap.measure_float_put(m_latency_ms, end_ms)
+
+    # Record the number of lines in
+    mmap.measure_int_put(m_lines_in, 1)
+
+    # Record the line length
+    mmap.measure_int_put(m_line_lengths, len(line))
+
+    tmap = tag_map_module.TagMap()
+    tmap.insert(key_method, tag_value_module.TagValue("repl"))
+
+    # Finally, record the measurements against the tag map
+    mmap.record(tmap)
+
+if __name__ == "__main__":
+    main()
+```
+
+#### Enable Views
+```python
+#!/usr/bin/env python
+
+import sys
+import time
+
+from opencensus.stats import stats
+from opencensus.stats import aggregation as aggregation_module
+from opencensus.stats import measure as measure_module
+from opencensus.stats import view as view_module
+from opencensus.tags import tag_key as tag_key_module
+from opencensus.tags import tag_map as tag_map_module
+from opencensus.tags import tag_value as tag_value_module
+
+# Create the measures
+# The latency in milliseconds
+m_latency_ms = measure_module.MeasureFloat("repl/latency", "The latency in milliseconds per REPL loop", "ms")
+
+# Counts the number of lines read in from standard input
+m_lines_in = measure_module.MeasureInt("repl/lines_in", "The number of lines read in", "1")
+
+# Counts the number of non end-of-file (EOF) errors.
+m_errors = measure_module.MeasureInt("repl/errors", "The number of errors encountered", "1")
+
+# Counts/groups the lengths of lines read in.
+m_line_lengths = measure_module.MeasureInt("repl/line_lengths", "The distribution of line lengths", "By")
+
+# The stats recorder
+stats_recorder = stats.Stats().stats_recorder
+
+# Create the tag key
+key_method = tag_key_module.TagKey("method")
+
+latency_view = view_module.View("demo/latency", "The distribution of the latencies",
+ [key_method],
+ m_latency_ms,
+ # Latency in buckets:
+ # [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
+ aggregation_module.DistributionAggregation([0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000]))
+
+line_count_view = view_module.View("demo/lines_in", "The number of lines from standard input",
+ [],
+ m_lines_in,
+ aggregation_module.CountAggregation())
+
+error_count_view = view_module.View("demo/errors", "The number of errors encountered",
+ [key_method],
+ m_errors,
+ aggregation_module.CountAggregation())
+
+line_length_view = view_module.View("demo/line_lengths", "Groups the lengths of keys in buckets",
+ [],
+ m_line_lengths,
+    # Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
+ aggregation_module.DistributionAggregation([0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000]))
+
+def main():
+ # In a REPL:
+ # 1. Read input
+ # 2. process input
+ while True:
+ readEvaluateProcess()
+
+def readEvaluateProcess():
+ line = sys.stdin.readline()
+ start = time.time()
+ print(line.upper())
+
+ # Now record the stats
+ # Create the measure_map into which we'll insert the measurements
+ mmap = stats_recorder.new_measurement_map()
+ end_ms = (time.time() - start) * 1000.0 # Seconds to milliseconds
+
+ # Record the latency
+ mmap.measure_float_put(m_latency_ms, end_ms)
+
+ # Record the number of lines in
+ mmap.measure_int_put(m_lines_in, 1)
+
+ # Record the line length
+ mmap.measure_int_put(m_line_lengths, len(line))
+
+ tmap = tag_map_module.TagMap()
+ tmap.insert(key_method, tag_value_module.TagValue("repl"))
+
+ # Insert the tag map finally
+ mmap.record(tmap)
+
+if __name__ == "__main__":
+ main()
+```
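The `DistributionAggregation` bounds above partition recorded values into buckets: a value lands in the bucket whose lower bound is the largest bound not exceeding it. A hypothetical helper sketches that rule:

```python
import bisect

# Latency bucket boundaries in milliseconds, matching the view above.
LATENCY_BOUNDS = [0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000]

def bucket_index(value, bounds=LATENCY_BOUNDS):
    """Index of the distribution bucket that value falls into."""
    return bisect.bisect_right(bounds, value) - 1
```

For example, a 30 ms loop falls into bucket 1, the `[25, 50)` range.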
+
+#### Viewing your Metrics on Stackdriver
+With the above you should now be able to navigate to the [Stackdriver Metrics Explorer](https://app.google.stackdriver.com/metrics-explorer), select your project, and view the metrics.
+
+In the query box to find metrics, type `quickstart` as a prefix:
+
+
+
+And on selecting any of the metrics e.g. `quickstart/demo/lines_in`, we’ll get...
+
+
+
+Let’s examine the latency buckets:
+
+
+
+On checking out the Stacked Area display of the latency, we can see that the 99th percentile latency was 24.75ms. And, for `line_lengths`:
+
+
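That 99th-percentile figure is derived by Stackdriver from the latency distribution; over raw samples, the idea reduces to a nearest-rank lookup, sketched here for illustration only:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
    s = sorted(samples)
    rank = max(1, -(-p * len(s) // 100))  # ceiling division without math.ceil
    return s[rank - 1]
```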
diff --git a/content/quickstart/python/tracing.md b/content/quickstart/python/tracing.md
new file mode 100644
index 00000000..956408ea
--- /dev/null
+++ b/content/quickstart/python/tracing.md
@@ -0,0 +1,262 @@
+---
+title: "Tracing"
+date: 2018-07-22T20:29:06-07:00
+draft: false
+class: "shadowed-image lightbox"
+---
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+#### Table of contents
+
+- [Requirements](#requirements)
+- [Installation](#installation)
+- [Getting started](#getting-started)
+- [Enable Tracing](#enable-tracing)
+ - [Import Packages](#import-tracing-packages)
+ - [Instrumentation](#instrument-tracing)
+- [Exporting to Stackdriver](#exporting-to-stackdriver)
+ - [Import Packages](#import-exporting-packages)
+ - [Export Traces](#export-traces)
+ - [Create Annotations](#create-annotations)
+- [Viewing your Traces on Stackdriver](#viewing-your-traces-on-stackdriver)
+
+In this quickstart, we’ll glean insights into a segment of code and learn how to:
+
+1. Trace the code using [OpenCensus Tracing](/core-concepts/tracing)
+2. Register and enable an exporter for a [backend](/core-concepts/exporters/#supported-backends) of our choice
+3. View traces on the backend of our choice
+
+#### Requirements
+- Python
+- Google Cloud Platform account and project
+- Google Stackdriver Tracing enabled on your project (Need help? [Click here](/codelabs/stackdriver))
+
+#### Installation
+
+OpenCensus: `pip install opencensus`
+
+#### Getting Started
+
+{{% notice note %}}
+Unsure how to write and execute Python code? [Click here](https://docs.python.org/).
+{{% /notice %}}
+
+It would be nice if we could trace the following code, thus giving us observability into how the code functions.
+
+First, create a file called `repl.py`.
+```bash
+touch repl.py
+```
+
+Next, put the following code inside of `repl.py`:
+
+```python
+#!/usr/bin/env python
+
+import sys
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        line = sys.stdin.readline()
+        print(line.upper())
+
+if __name__ == "__main__":
+    main()
+```
+
+You can run the code via `python repl.py`.
+
+#### Enable Tracing
+
+
+##### Import Packages
+
+To enable tracing, we’ll import the `trace.tracer` package from `opencensus`:
+```python
+from opencensus.trace.tracer import Tracer
+```
+
+Your `repl.py` should now look like this:
+
+```python
+#!/usr/bin/env python
+
+import sys
+
+from opencensus.trace.tracer import Tracer
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        line = sys.stdin.readline()
+        print(line.upper())
+
+if __name__ == "__main__":
+    main()
+```
+
+
+##### Instrumentation
+
+We will be tracing the execution as it starts in `readEvaluateProcess` and travels through `processInput`.
+
+To accomplish this, we'll create a span in each of the functions.
+
+You can create a span by wrapping a function's body in a `with` block:
+```python
+with tracer.span(name=name):
+ # Code here
+ pass
+```
+
+```python
+from opencensus.trace.tracer import Tracer
+
+tracer = Tracer()
+with tracer.span(name="repl") as span:
+    print(line.upper())
+```
+
+Your `repl.py` should now look like this:
+
+```python
+#!/usr/bin/env python
+
+import sys
+
+from opencensus.trace.tracer import Tracer
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        readEvaluateProcess()
+
+def readEvaluateProcess():
+    tracer = Tracer()
+    with tracer.span(name="repl") as span:
+        line = sys.stdin.readline()
+        print(line.upper())
+
+if __name__ == "__main__":
+    main()
+```
+
+When creating a new span with `tracer.span(name="spanName")`, the package first checks whether a parent span already exists in the current thread's local storage/context. If one exists, a child span is created. Otherwise, the newly created span is inserted into the thread local storage/context to become the parent span.
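That parent-lookup behavior can be modeled with a plain thread-local stack. The class below is a toy stand-in, not the actual opencensus implementation:

```python
import threading

_local = threading.local()

class ToySpan(object):
    """Minimal stand-in showing how spans find their parent."""
    def __init__(self, name):
        self.name = name
        self.parent = None

    def __enter__(self):
        stack = getattr(_local, "stack", None)
        if stack is None:
            stack = _local.stack = []
        if stack:
            # A span is already active on this thread: become its child.
            self.parent = stack[-1]
        stack.append(self)
        return self

    def __exit__(self, *exc_info):
        _local.stack.pop()
        return False

with ToySpan("repl") as outer:
    with ToySpan("processInput") as inner:
        pass  # inner.parent is outer here
```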
+
+#### Exporting traces to Stackdriver
+
+
+##### Import Packages
+To turn on Stackdriver Tracing, we’ll need to import the Stackdriver exporter from `opencensus.trace.exporters`:
+
+```python
+from opencensus.trace.exporters import stackdriver_exporter
+from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport
+from opencensus.trace.samplers import always_on
+```
+
+Your `repl.py` should now look like this:
+
+```python
+#!/usr/bin/env python
+
+import os
+import sys
+
+from opencensus.trace.tracer import Tracer
+from opencensus.trace.exporters import stackdriver_exporter
+from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport
+from opencensus.trace.samplers import always_on
+
+# First, create the exporter
+sde = stackdriver_exporter.StackdriverExporter(
+    project_id=os.environ.get("GCP_PROJECT_ID"),
+    transport=BackgroundThreadTransport
+)
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        readEvaluateProcess()
+
+def readEvaluateProcess():
+    # For demo purposes, we are always sampling
+    tracer = Tracer(sampler=always_on.AlwaysOnSampler(), exporter=sde)
+    with tracer.span(name="repl") as span:
+        line = sys.stdin.readline()
+        out = processInput(tracer, line)
+        print("< %s" % (out))
+
+def processInput(tracer, data):
+    with tracer.span(name='processInput') as span:
+        return data.upper()
+
+if __name__ == "__main__":
+    main()
+```
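The exporter above reads `GCP_PROJECT_ID` straight from the environment. If you want a fallback when the variable is unset, a small helper, hypothetical here and mirroring the Java quickstart's `envOrAlternative`, does the job:

```python
import os

def env_or_alternative(key, *alternatives):
    """Return the environment variable's value, or the first non-empty alternative."""
    value = os.environ.get(key)
    if value:
        return value
    for alternative in alternatives:
        if alternative:
            return alternative
    return value
```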
+
+##### Create Annotations
+We can add metadata to our traces to increase our post-mortem insight.
+
+Let's record the length of each requested string so that it is available to view when we are looking at our traces. We can do this by annotating our `readEvaluateProcess` function.
+
+```python
+span.add_annotation("Invoking processLine", len=len(line), use="repl")
+```
+
+The final state of `repl.py` should be this:
+
+```python
+#!/usr/bin/env python
+
+import os
+import sys
+
+from opencensus.trace.tracer import Tracer
+from opencensus.trace.exporters import stackdriver_exporter
+from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport
+from opencensus.trace.samplers import always_on
+
+# First, create the exporter
+sde = stackdriver_exporter.StackdriverExporter(
+    project_id=os.environ.get("GCP_PROJECT_ID"),
+    transport=BackgroundThreadTransport
+)
+
+def main():
+    # In a REPL:
+    # 1. Read input
+    # 2. Process input
+    while True:
+        readEvaluateProcess()
+
+def readEvaluateProcess():
+    # For demo purposes, we are always sampling
+    tracer = Tracer(sampler=always_on.AlwaysOnSampler(), exporter=sde)
+    with tracer.span(name="repl") as span:
+        line = sys.stdin.readline()
+        span.add_annotation("Invoking processLine", len=len(line), use="repl")
+        out = processInput(tracer, line)
+        print("< %s" % (out))
+        # The span ends automatically when the `with` block exits.
+
+def processInput(tracer, data):
+    with tracer.span(name='processInput'):
+        return data.upper()
+
+if __name__ == "__main__":
+    main()
+```
+
+#### Viewing your Traces on Stackdriver
+With the code above, you should now be able to navigate to the [Google Cloud Platform console](https://console.cloud.google.com/traces/traces), select your project, and view your traces.
+
+
+
+Clicking on one of the traces should reveal the annotation whose description is `Invoking processLine`, and clicking on that annotation should show our attributes `len` and `use`.
+
+
diff --git a/content/reporting-issues/_index.md b/content/reporting-issues/_index.md
new file mode 100644
index 00000000..05a5c7ff
--- /dev/null
+++ b/content/reporting-issues/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Reporting issues"
+date: 2018-07-22T20:14:00-07:00
+draft: false
+weight: 120
+---
+
+If you'd like to report bugs, security issues, or any other sort of issue, please find below links to each language implementation's repository:
+
+Language|URL
+---|---
+Go|https://github.com/census-instrumentation/opencensus-go
+Java|https://github.com/census-instrumentation/opencensus-java
+Python|https://github.com/census-instrumentation/opencensus-python
+PHP|https://github.com/census-instrumentation/opencensus-php
+Node.js|https://github.com/census-instrumentation/opencensus-node
+C#|https://github.com/census-instrumentation/opencensus-csharp
+Erlang|https://github.com/census-instrumentation/opencensus-erlang
+
+For all other issues, please send an email to
+[census-developers@googlegroups.com](mailto:census-developers@googlegroups.com)
+
+or reach out via the [Gitter channel](https://gitter.im/census-instrumentation/Lobby)
diff --git a/content/resources/_index.md b/content/resources/_index.md
new file mode 100644
index 00000000..df47b3ff
--- /dev/null
+++ b/content/resources/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Resources"
+date: 2018-07-16T14:47:02-07:00
+draft: true
+---
+
diff --git a/content/roadmap.md b/content/roadmap.md
deleted file mode 100644
index 71422935..00000000
--- a/content/roadmap.md
+++ /dev/null
@@ -1,30 +0,0 @@
-+++
-Description = "roadmap"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-
-title = "Roadmap"
-date = "2018-05-11T12:09:08-05:00"
-+++
-
-Read OpenCensus’s journey ahead: [platforms and languages](https://opensource.googleblog.com/2018/05/opencensus-journey-ahead-part-1.html).
-
----
-
-#### Languages
-
-{{< sc_supportedLanguages />}}
-
-
----
-
-#### Exporters
-
-{{< sc_supportedExporters />}}
-
----
-
-#### How do I contribute?
-
-Contributions are highly appreciated! Please follow the steps to [contribute](/community).
diff --git a/content/roadmap/_index.md b/content/roadmap/_index.md
new file mode 100644
index 00000000..9d0482bc
--- /dev/null
+++ b/content/roadmap/_index.md
@@ -0,0 +1,20 @@
+---
+title: "Roadmap"
+date: 2018-07-16T14:49:14-07:00
+draft: false
+weight: 100
+---
+Read OpenCensus’s journey ahead: [platforms and languages](https://opensource.googleblog.com/2018/05/opencensus-journey-ahead-part-1.html).
+
+
+#### Languages
+
+{{}}
+
+#### Exporters
+
+**T**: Backend supports Tracing
+
+**S**: Backend supports Stats
+
+{{}}
diff --git a/content/ruby.md b/content/ruby.md
deleted file mode 100644
index 4b2e94b1..00000000
--- a/content/ruby.md
+++ /dev/null
@@ -1,103 +0,0 @@
-+++
-Description = "ruby"
-Tags = ["Development", "OpenCensus"]
-Categories = ["Development", "OpenCensus"]
-menu = "main"
-type = "leftnav"
-title = "Ruby"
-date = "2018-05-18T13:52:18-05:00"
-+++
-
-
-This example application demonstrates how to use OpenCensus to record traces for a Sinatra-based web application. You can find the source code for the application at https://github.com/census-instrumentation/opencensus-ruby/tree/master/examples/hello-world.
-
-
-#### API Documentation
-The OpenCensus Ruby API is documented at http://www.rubydoc.info/gems/opencensus.
-
----
-
-#### Prerequisites
-Ruby 2.2 or later is required. Make sure you have Bundler installed as well.
-```ruby
-gem install bundler
-```
-
----
-
-#### Installation
-Get the example from the OpenCensus Ruby repository on Github, and cd into the example application directory.
-
-```
-git clone https://github.com/census-instrumentation/opencensus-ruby.git
-cd opencensus-ruby/examples/hello-world
-```
-
-Install the dependencies using Bundler.
-
-```
-bundle install
-```
-
-#### Running the example
-Run the application locally on your workstation with:
-
-```ruby
-bundle exec ruby hello.rb
-```
-
-This will run on port 4567 by default, and display application logs on the terminal. From a separate shell, you can send requests using a tool such as curl:
-
-```ruby
-curl http://localhost:4567/
-curl http://localhost:4567/lengthy
-```
-The running application will log the captured traces.
-
-#### The example application code
-The example application’s Gemfile includes the **opencensus** gem:
-
-```ruby
-source "https://rubygems.org"
-gem "faraday", "~> 0.14"
-gem "opencensus", "~> 0.3"
-gem "sinatra", "~> 2.0"
-```
-
-Following is the **hello.rb** source file from the example:
-```ruby
-require "sinatra"
-
-# Install the Rack middleware to trace incoming requests.
-require "opencensus/trace/integrations/rack_middleware"
-use OpenCensus::Trace::Integrations::RackMiddleware
-
-# Access the Faraday middleware which will be used to trace outgoing
-# HTTP requests.
-require "opencensus/trace/integrations/faraday_middleware"
-
-# Each request will be traced automatically by the middleware.
-get "/" do
- "Hello world!"
-end
-
-# Traces for this request will also include sub-spans as indicated
-# below.
-get "/lengthy" do
- # Configure this Faraday connection with a middleware to trace
- # outgoing requests.
- conn = Faraday.new(url: "http://www.google.com") do |c|
- c.use OpenCensus::Trace::Integrations::FaradayMiddleware
- c.adapter Faraday.default_adapter
- end
- conn.get "/"
-
- # You may instrument your code to create custom spans for
- # long-running operations.
- OpenCensus::Trace.in_span "long task" do
- sleep rand
- end
-
- "Done!"
-end
-```
diff --git a/content/stats.md b/content/stats.md
deleted file mode 100644
index eb247924..00000000
--- a/content/stats.md
+++ /dev/null
@@ -1,98 +0,0 @@
-+++
-title = "Stats"
-type = "leftnav"
-+++
-
-Application and request metrics are important indicators
-of availibility. Custom metrics can provide insights into
-how availability indicators impact user experience or the business.
-Collected data can help automatically
-generate alerts at an outage or trigger better scheduling
-decisions to scale up a deployment automatically upon high demand.
-
-
-
-Stats collection allows users to collect custom metrics and provide
-a set of predefined metrics through the framework integrations.
-Collected data can be multidimensional and
-it can be filtered and grouped by [tags](/tags).
-
-Stats collection reqires two steps:
-
-* Definition of measures and recording of data points.
-* Definition and registeration of views to aggregate the recorded values.
-
----
-
-## Measures
-
-A measure represents a metric type to be recorded. For example, request latency
-in µs and request size in KBs are examples of measures to collect from a server.
-All measures are identified by a name and also have a description and a unit.
-Libraries and frameworks can define and export measures for their end users to
-collect data on the provided measures.
-
-Below, there is an example measure that represents HTTP latency in ms:
-
-```
-RequestLatency = {
- "http/request_latency",
- "HTTP request latency in microseconds",
- "microsecs",
-}
-```
----
-
-## Recording
-Measurement is a data point to be collected for a measure. For example, for a latency (ms) measure, 100 is a measurement that represents a 100 ms latency event. Users collect data points on the existing measures with the current context. Tags from the current context are recorded with the measurements if they are any.
-
-Recorded measurements are dropped if user is not aggregating them via views. Users don’t necessarily need to conditionally enable/disable recording to reduce cost. Recording of measurements is cheap.
-
-Libraries can record measurements and provide measures,
-and end-users can later decide on which measures
-they want to collect.
-
----
-
-## Views
-
-In order to aggregate measurements and export, users need to define views.
-A view allows recorded measurements to be aggregated with a one of the
-aggregation methods set by the user cumulatively.
-All recorded measurements is broken down by user-provided [tag](/tags) keys.
-
-Following aggregation methods are supported:
-
-* **Count**: The count of the number of measurement points.
-* **Distribution**: Histogram distribution of the measurement points.
-* **Sum**: A sum up of the measurement points.
-* **LastValue**: Keeps the last recorded value, drops everything else.
-
-Users can dynamically create and delete views at runtime. Libraries may
-export their own views and claim the view names by registering them.
-
----
-
-## Sampling
-
-Stats are NOT sampled to be able to represent uncommon
-cases. For example, a [99th percentile latency issue](https://www.youtube.com/watch?v=lJ8ydIuPFeU)
-is rare. Combined with a low sampling rate,
-it might be hard to capture it. This is why stats are not sampled.
-
-On the other hand, exporting every indiviual measurement would
-be very expensive in terms of network traffic. This is why stats
-collection aggregates data in the process and exports only the
-aggregated data.
-
----
-
-## Exporting
-
-Collected and aggregated data can be exported to a stats collection
-backend by registering an exporter.
-
-Multiple exporters can be registered to upload the data to various different backends.
-Users can unregister the exporters if they no longer are needed.
-
-See [exporters](/exporters) to learn more.
diff --git a/content/supported-exporters/C++/_index.md b/content/supported-exporters/C++/_index.md
new file mode 100644
index 00000000..e69de29b
diff --git a/content/supported-exporters/Go/DataDog.md b/content/supported-exporters/Go/DataDog.md
new file mode 100644
index 00000000..b4d12495
--- /dev/null
+++ b/content/supported-exporters/Go/DataDog.md
@@ -0,0 +1,119 @@
+---
+title: "DataDog (Stats and Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+[DataDog](https://www.datadoghq.com/) is a real-time monitoring system that supports distributed tracing and monitoring.
+
+Its OpenCensus Go exporter is available at [https://godoc.org/github.com/DataDog/opencensus-go-exporter-datadog](https://godoc.org/github.com/DataDog/opencensus-go-exporter-datadog)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+##### Creating the exporter
+
+To create the exporter, we'll need to:
+
+* Have DataDog credentials, which you can get by following the [getting started guide](https://docs.datadoghq.com/getting_started/)
+* Create an exporter in code
+
+Start by importing the exporter package and creating the exporter:
+
+{{}}
+import "github.com/DataDog/opencensus-go-exporter-datadog"
+
+// then create the actual exporter
+dd, err := datadog.NewExporter(datadog.Options{})
+if err != nil {
+ log.Fatalf("Failed to create the DataDog exporter: %v", err)
+}
+{{}}
+
+and then register it as a stats exporter, a trace exporter, or both:
+
+{{}}
+{{}}
+package main
+
+import (
+ "log"
+
+ "github.com/DataDog/opencensus-go-exporter-datadog"
+ "go.opencensus.io/stats/view"
+)
+
+func main() {
+ dd, err := datadog.NewExporter(datadog.Options{})
+ if err != nil {
+ log.Fatalf("Failed to create the DataDog exporter: %v", err)
+ }
+ // It is imperative to invoke flush before your main function exits
+ defer dd.Stop()
+
+ // Register it as a metrics exporter
+ view.RegisterExporter(dd)
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "github.com/DataDog/opencensus-go-exporter-datadog"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ dd, err := datadog.NewExporter(datadog.Options{})
+ if err != nil {
+ log.Fatalf("Failed to create the DataDog exporter: %v", err)
+ }
+ // It is imperative to invoke flush before your main function exits
+ defer dd.Stop()
+
+	// Register it as a trace exporter
+	trace.RegisterExporter(dd)
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "github.com/DataDog/opencensus-go-exporter-datadog"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+	dd, err := datadog.NewExporter(datadog.Options{})
+	if err != nil {
+		log.Fatalf("Failed to create the DataDog exporter: %v", err)
+	}
+	// It is imperative to invoke flush before your main function exits
+	defer dd.Stop()
+
+	// Register it as a metrics exporter
+	view.RegisterExporter(dd)
+
+	// Register it as a trace exporter
+	trace.RegisterExporter(dd)
+}
+{{}}
+{{}}
+
+#### Viewing your metrics
+Please visit [https://docs.datadoghq.com/graphing/](https://docs.datadoghq.com/graphing/)
+
+#### Viewing your traces
+Please visit [https://docs.datadoghq.com/tracing/](https://docs.datadoghq.com/tracing/)
diff --git a/content/supported-exporters/Go/Jaeger.md b/content/supported-exporters/Go/Jaeger.md
new file mode 100644
index 00000000..ce578aaf
--- /dev/null
+++ b/content/supported-exporters/Go/Jaeger.md
@@ -0,0 +1,69 @@
+---
+title: "Jaeger (Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Jaeger for visualizing your data. For assistance setting up Jaeger, [Click here](/codelabs/jaeger) for a guided codelab.
+{{% /notice %}}
+
+Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies.
+It is used for monitoring and troubleshooting microservices-based distributed systems, including:
+
+* Distributed context propagation
+* Distributed transaction monitoring
+* Root cause analysis
+* Service dependency analysis
+* Performance / latency optimization
+
+OpenCensus Go has support for this exporter available through package [go.opencensus.io/exporter/jaeger](https://godoc.org/go.opencensus.io/exporter/jaeger)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Jaeger endpoint available to receive traces
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "go.opencensus.io/exporter/jaeger"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ agentEndpointURI := "localhost:6831"
+	collectorEndpointURI := "http://localhost:14268"
+
+ je, err := jaeger.NewExporter(jaeger.Options{
+ AgentEndpoint: agentEndpointURI,
+ Endpoint: collectorEndpointURI,
+ ServiceName: "demo",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Jaeger exporter: %v", err)
+ }
+
+ // And now finally register it as a Trace Exporter
+ trace.RegisterExporter(je)
+}
+{{}}
+
+#### Viewing your traces
+Please visit the Jaeger UI endpoint [http://localhost:16686](http://localhost:16686)
+
+#### Project link
+You can find out more about the Jaeger project at [https://www.jaegertracing.io/](https://www.jaegertracing.io/)
diff --git a/content/supported-exporters/Go/Prometheus.md b/content/supported-exporters/Go/Prometheus.md
new file mode 100644
index 00000000..17cc7ee5
--- /dev/null
+++ b/content/supported-exporters/Go/Prometheus.md
@@ -0,0 +1,104 @@
+---
+title: "Prometheus (Stats)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Prometheus for receiving and visualizing your data. For assistance setting up Prometheus, [Click here](/codelabs/prometheus) for a guided codelab.
+{{% /notice %}}
+
+Prometheus is a monitoring system that collects metrics by scraping
+exposed endpoints at regular intervals and evaluating rule expressions.
+It can also trigger alerts if certain conditions are met.
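The scrape model itself is simple: each target serves its current metric values as plain text, and Prometheus pulls that page on a schedule. As a rough, language-neutral illustration (sketched in Python; the metric names are made up, and this is not the OpenCensus exporter itself), a scraped payload looks like this:

```python
def render_metrics(counters):
    """Render counter values in the Prometheus text exposition format,
    i.e. the plain-text body a /metrics endpoint would return."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append("# TYPE %s counter" % name)
        lines.append("%s %s" % (name, value))
    return "\n".join(lines) + "\n"

# Prometheus would fetch a payload like this every scrape_interval:
print(render_metrics({"demo_requests_total": 42}))
```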
+
+OpenCensus Go allows exporting stats to Prometheus by means of the Prometheus package
+[go.opencensus.io/exporter/prometheus](https://godoc.org/go.opencensus.io/exporter/prometheus)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Running Prometheus](#running-prometheus)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Import and use the Prometheus exporter package
+* Define a namespace that will uniquely identify our metrics when viewed on Prometheus
+* Expose a port on which we shall run a `/metrics` endpoint
+* With the defined port, we'll need a Prometheus configuration file so that Prometheus can scrape from this endpoint
+{{}}
+import "go.opencensus.io/exporter/prometheus"
+
+// Then create the actual exporter
+pe, err := prometheus.NewExporter(prometheus.Options{
+ Namespace: "demo",
+})
+if err != nil {
+ log.Fatalf("Failed to create the Prometheus exporter: %v", err)
+}
+{{}}
+
+An instance of the Prometheus exporter implements [http.Handler](https://golang.org/pkg/net/http/#Handler),
+so we'll need to expose it on our port of choice, say ":8888"
+{{}}
+package main
+
+import (
+ "log"
+ "net/http"
+
+ "go.opencensus.io/exporter/prometheus"
+)
+
+func main() {
+ pe, err := prometheus.NewExporter(prometheus.Options{
+ Namespace: "demo",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create Prometheus exporter: %v", err)
+ }
+	go func() {
+		mux := http.NewServeMux()
+		mux.Handle("/metrics", pe)
+		if err := http.ListenAndServe(":8888", mux); err != nil {
+			log.Fatalf("Failed to run Prometheus /metrics endpoint: %v", err)
+		}
+	}()
+
+	// Block so that the /metrics endpoint stays up to be scraped.
+	select {}
+}
+{{}}
+
+and then for our corresponding `prometheus.yaml` file:
+
+```yaml
+global:
+ scrape_interval: 10s
+
+ external_labels:
+ monitor: 'demo'
+
+scrape_configs:
+ - job_name: 'demo'
+
+ scrape_interval: 10s
+
+ static_configs:
+ - targets: ['localhost:8888']
+```
+
+##### Running Prometheus
+And then run Prometheus with your configuration
+```shell
+prometheus --config.file=prometheus.yaml
+```
+
+##### Viewing your metrics
+Please visit [http://localhost:9090](http://localhost:9090)
+
+#### Project link
+You can find out more about the Prometheus project at [https://prometheus.io/](https://prometheus.io/)
diff --git a/content/supported-exporters/Go/Stackdriver.md b/content/supported-exporters/Go/Stackdriver.md
new file mode 100644
index 00000000..62a57553
--- /dev/null
+++ b/content/supported-exporters/Go/Stackdriver.md
@@ -0,0 +1,142 @@
+---
+title: "Stackdriver (Stats and Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the Google Cloud Platform Console.
+You can track how requests propagate through your application and receive detailed near real-time performance insights.
+Stackdriver Trace automatically analyzes all of your application's traces to generate in-depth latency reports to surface performance degradations,
+and can capture traces from all of your VMs, containers, or Google App Engine projects.
+
+Stackdriver Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications.
+Stackdriver collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services, hosted uptime probes, application instrumentation,
+and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others.
+Stackdriver ingests that data and generates insights via dashboards, charts, and alerts. Stackdriver alerting helps you collaborate by
+integrating with Slack, PagerDuty, HipChat, Campfire, and more.
+
+OpenCensus Go has support for this exporter available through package [contrib.go.opencensus.io/exporter/stackdriver](https://godoc.org/contrib.go.opencensus.io/exporter/stackdriver)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Have a GCP Project ID
+* Create an exporter in code
+
+{{}}
+import "contrib.go.opencensus.io/exporter/stackdriver"
+
+// Then create the actual exporter
+sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "demo-project-id",
+})
+if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+}
+{{}}
+
+{{}}
+{{}}
+package main
+
+import (
+ "log"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/stats/view"
+)
+
+func main() {
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "demo-project-id",
+ // MetricPrefix helps uniquely identify your metrics.
+ MetricPrefix: "demo-prefix",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ // It is imperative to invoke flush before your main function exits
+ defer sd.Flush()
+
+ // Register it as a metrics exporter
+ view.RegisterExporter(sd)
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "demo-project-id",
+ // MetricPrefix helps uniquely identify your metrics.
+ MetricPrefix: "demo-prefix",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ // It is imperative to invoke flush before your main function exits
+ defer sd.Flush()
+
+ // Register it as a trace exporter
+ trace.RegisterExporter(sd)
+}
+{{}}
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "contrib.go.opencensus.io/exporter/stackdriver"
+ "go.opencensus.io/stats/view"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ sd, err := stackdriver.NewExporter(stackdriver.Options{
+ ProjectID: "demo-project-id",
+ // MetricPrefix helps uniquely identify your metrics.
+ MetricPrefix: "demo-prefix",
+ })
+ if err != nil {
+ log.Fatalf("Failed to create the Stackdriver exporter: %v", err)
+ }
+ // It is imperative to invoke flush before your main function exits
+ defer sd.Flush()
+
+ // Register it as a metrics exporter
+ view.RegisterExporter(sd)
+
+ // Register it as a trace exporter
+ trace.RegisterExporter(sd)
+}
+{{}}
+{{}}
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/supported-exporters/Go/XRay.md b/content/supported-exporters/Go/XRay.md
new file mode 100644
index 00000000..1ccdd93c
--- /dev/null
+++ b/content/supported-exporters/Go/XRay.md
@@ -0,0 +1,64 @@
+---
+title: "AWS X-Ray (Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+AWS X-Ray is a distributed trace collection and analysis system from Amazon Web Services.
+
+Its support is available by means of the X-Ray package [https://godoc.org/github.com/census-instrumentation/opencensus-go-exporter-aws](https://godoc.org/github.com/census-instrumentation/opencensus-go-exporter-aws)
+
+#### Table of contents
+- [Requirements](#requirements)
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+
+
+##### Requirements
+You'll need an AWS developer account. In case you haven't yet enabled AWS X-Ray, please visit [https://console.aws.amazon.com/xray/home](https://console.aws.amazon.com/xray/home)
+
+##### Creating the exporter
+
+This is done by importing the exporter package:
+
+{{}}
+import xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
+
+// Then create the actual exporter
+xe, err := xray.NewExporter(xray.WithVersion("latest"))
+if err != nil {
+ log.Fatalf("Failed to create the AWS X-Ray exporter: %v", err)
+}
+{{}}
+
+Then finally register it as a trace exporter:
+{{}}
+package main
+
+import (
+ "log"
+
+ xray "github.com/census-instrumentation/opencensus-go-exporter-aws"
+ "go.opencensus.io/trace"
+)
+
+func main() {
+ xe, err := xray.NewExporter(xray.WithVersion("latest"))
+ if err != nil {
+ log.Fatalf("Failed to create the AWS X-Ray exporter: %v", err)
+ }
+ // It is imperative that your exporter invokes Flush before your program exits!
+ defer xe.Flush()
+
+ trace.RegisterExporter(xe)
+}
+{{}}
+
+
+##### Viewing your traces
+Please visit [https://console.aws.amazon.com/xray/home](https://console.aws.amazon.com/xray/home)
diff --git a/content/supported-exporters/Go/Zipkin.md b/content/supported-exporters/Go/Zipkin.md
new file mode 100644
index 00000000..a99e8116
--- /dev/null
+++ b/content/supported-exporters/Go/Zipkin.md
@@ -0,0 +1,67 @@
+---
+title: "Zipkin (Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Zipkin for visualizing your data. For assistance setting up Zipkin, [Click here](/codelabs/zipkin) for a guided codelab.
+{{% /notice %}}
+
+Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.
+
+It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper.
+
+OpenCensus Go has support for this exporter available through package [go.opencensus.io/exporter/zipkin](https://godoc.org/go.opencensus.io/exporter/zipkin)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Zipkin endpoint available to receive traces
+
+{{}}
+package main
+
+import (
+ "log"
+
+ "go.opencensus.io/exporter/zipkin"
+ "go.opencensus.io/trace"
+
+ openzipkin "github.com/openzipkin/zipkin-go"
+ zipkinHTTP "github.com/openzipkin/zipkin-go/reporter/http"
+)
+
+func main() {
+ localEndpointURI := "192.168.1.5:5454"
+ reporterURI := "http://localhost:9411/api/v2/spans"
+ serviceName := "server"
+
+ localEndpoint, err := openzipkin.NewEndpoint(serviceName, localEndpointURI)
+ if err != nil {
+ log.Fatalf("Failed to create Zipkin localEndpoint with URI %q error: %v", localEndpointURI, err)
+ }
+
+ reporter := zipkinHTTP.NewReporter(reporterURI)
+ ze := zipkin.NewExporter(reporter, localEndpoint)
+
+ // And now finally register it as a Trace Exporter
+ trace.RegisterExporter(ze)
+}
+{{}}
+
+#### Viewing your traces
+Please visit the Zipkin UI endpoint [http://localhost:9411](http://localhost:9411)
+
+#### Project link
+You can find out more about the Zipkin project at [https://zipkin.io/](https://zipkin.io/)
diff --git a/content/supported-exporters/Go/_index.md b/content/supported-exporters/Go/_index.md
new file mode 100644
index 00000000..a45d6815
--- /dev/null
+++ b/content/supported-exporters/Go/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Go"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 2
+class: "resized-logo"
+---
+
+
+
+OpenCensus Go provides support for various exporters, including:
+
+* [AWS X-Ray](/supported-exporters/go/xray/)
+* [Google Stackdriver Tracing and Monitoring](/supported-exporters/go/stackdriver/)
+* [DataDog APM and Tracing](/supported-exporters/go/datadog/)
+* [Prometheus Monitoring](/supported-exporters/go/prometheus/)
+* [Zipkin](/supported-exporters/go/zipkin)
+* [Jaeger](/supported-exporters/go/jaeger)
diff --git a/content/supported-exporters/Java/Instana.md b/content/supported-exporters/Java/Instana.md
new file mode 100644
index 00000000..b664c3b7
--- /dev/null
+++ b/content/supported-exporters/Java/Instana.md
@@ -0,0 +1,71 @@
+---
+title: "Instana (Tracing)"
+date: 2018-07-21T18:52:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+Instana provides AI-powered application and infrastructure monitoring, allowing you to
+deliver faster with confidence through automatic analysis and optimization.
+
+OpenCensus Java has support for this exporter available through package [io.opencensus.exporter.trace.instana](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-trace-instana)
+
+More information can be found at the [Instana website](https://www.instana.com/)
+
+#### Table of contents
+- [Creating the trace exporter](#creating-the-trace-exporter)
+- [Viewing your traces](#viewing-your-traces)
+
+##### Creating the trace exporter
+To create the trace exporter, you'll need to:
+
+* Have Instana credentials
+* Use Maven to set up your pom.xml file
+* Create the exporter in code
+
+##### pom.xml
+
+```xml
+
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-trace-instana</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+</dependencies>
+```
+
+##### Creating the exporter in code
+
+{{}}
+package io.opencensus.tutorial.instana;
+
+import io.opencensus.exporter.trace.instana.InstanaTraceExporter;
+
+public class InstanaTutorial {
+ public static void main(String ...args) {
+ String agentEndpointURI = "http://localhost:42699/com.instana.plugin.generic.trace";
+ InstanaTraceExporter.createAndRegister(agentEndpointURI);
+ }
+}
+{{}}
diff --git a/content/supported-exporters/Java/Jaeger.md b/content/supported-exporters/Java/Jaeger.md
new file mode 100644
index 00000000..df76e39c
--- /dev/null
+++ b/content/supported-exporters/Java/Jaeger.md
@@ -0,0 +1,81 @@
+---
+title: "Jaeger (Tracing)"
+date: 2018-07-22T17:35:15-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Jaeger for visualizing your data. For assistance setting up Jaeger, [Click here](/codelabs/jaeger) for a guided codelab.
+{{% /notice %}}
+
+Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies.
+It is used for monitoring and troubleshooting microservices-based distributed systems, including:
+
+* Distributed context propagation
+* Distributed transaction monitoring
+* Root cause analysis
+* Service dependency analysis
+* Performance / latency optimization
+
+OpenCensus Java has support for this exporter available through package [io.opencensus.exporter.trace.jaeger](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-trace-jaeger)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Jaeger endpoint available to receive traces
+
+#### pom.xml
+If using Maven, add these to your pom.xml file
+```xml
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-trace-jaeger</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+        <scope>runtime</scope>
+    </dependency>
+</dependencies>
+```
+
+{{<highlight java>}}
+package io.opencensus.tutorial.jaeger;
+
+import io.opencensus.exporter.trace.jaeger.JaegerTraceExporter;
+
+public class JaegerTutorial {
+    public static void main(String ...args) throws Exception {
+        JaegerTraceExporter.createAndRegister("http://127.0.0.1:14268/api/traces", "service-b");
+    }
+}
+{{</highlight>}}
+
+#### Viewing your traces
+Please visit the Jaeger UI endpoint [http://localhost:16686](http://localhost:16686)
+
+#### Project link
+You can find out more about the Jaeger project at [https://www.jaegertracing.io/](https://www.jaegertracing.io/)
diff --git a/content/supported-exporters/Java/Prometheus.md b/content/supported-exporters/Java/Prometheus.md
new file mode 100644
index 00000000..2a5541d4
--- /dev/null
+++ b/content/supported-exporters/Java/Prometheus.md
@@ -0,0 +1,121 @@
+---
+title: "Prometheus (Stats)"
+date: 2018-07-22T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Prometheus for receiving and visualizing your data. For assistance setting up Prometheus, [Click here](/codelabs/prometheus) for a guided codelab.
+{{% /notice %}}
+
+Prometheus is a monitoring system that collects metrics by scraping exposed
+endpoints at regular intervals and evaluating rule expressions.
+It can also trigger alerts if certain conditions are met.
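To make the pull model concrete, here is a minimal scrape target — a hypothetical sketch using only the JDK's built-in `com.sun.net.httpserver`, not the OpenCensus exporter covered below. It serves one hard-coded counter in the Prometheus text exposition format, which is exactly what Prometheus fetches from each target at every scrape interval:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MetricsEndpoint {
    // Starts an HTTP server whose /metrics page serves one hard-coded counter
    // in the Prometheus text exposition format (HELP, TYPE, then samples).
    public static HttpServer startServer(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("localhost", port), 0);
        server.createContext("/metrics", exchange -> {
            String body = "# HELP demo_requests_total Total requests handled.\n"
                        + "# TYPE demo_requests_total counter\n"
                        + "demo_requests_total 42\n";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/plain; version=0.0.4");
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start(); // returns immediately; requests are served on a background thread
        return server;
    }

    public static void main(String... args) throws Exception {
        startServer(8888);
        System.out.println("Serving on http://localhost:8888/metrics");
    }
}
```

Pointing a `scrape_configs` target at `localhost:8888` would make Prometheus ingest `demo_requests_total` on every scrape; the OpenCensus exporter below automates producing this format from recorded stats.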
+
+OpenCensus Java allows exporting stats to Prometheus by means of the Prometheus package
+[io.opencensus.exporter.stats.prometheus](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-stats-prometheus)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Running Prometheus](#running-prometheus)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Import and use the Prometheus exporter package
+* Define a namespace that will uniquely identify our metrics when viewed on Prometheus
+* Expose a port on which we shall run a `/metrics` endpoint
+* With the defined port, we'll need a Prometheus configuration file so that Prometheus can scrape from this endpoint
+
+
+#### pom.xml
+
+Add this to your pom.xml file:
+
+```xml
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-stats-prometheus</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.prometheus</groupId>
+        <artifactId>simpleclient_httpserver</artifactId>
+        <version>0.3.0</version>
+    </dependency>
+</dependencies>
+```
+
+We also need to expose the Prometheus metrics endpoint, say on address "localhost:8888":
+
+{{<highlight java>}}
+package io.opencensus.tutorial.prometheus;
+
+import io.opencensus.exporter.stats.prometheus.PrometheusStatsExporter;
+import io.prometheus.client.exporter.HTTPServer;
+
+public class PrometheusTutorial {
+    public static void main(String ...args) throws Exception {
+        // Register the Prometheus exporter
+        PrometheusStatsExporter.createAndRegister();
+
+        // Run the server as a daemon on address "localhost:8888"
+        HTTPServer server = new HTTPServer("localhost", 8888, true);
+    }
+}
+{{</highlight>}}
+
+and then for our corresponding `prometheus.yaml` file:
+
+```yaml
+global:
+ scrape_interval: 10s
+
+ external_labels:
+ monitor: 'demo'
+
+scrape_configs:
+ - job_name: 'demo'
+
+ scrape_interval: 10s
+
+ static_configs:
+ - targets: ['localhost:8888']
+```
+
+##### Running Prometheus
+And then run Prometheus with your configuration
+```shell
+prometheus --config.file=prometheus.yaml
+```
+
+##### Viewing your metrics
+Please visit [http://localhost:9090](http://localhost:9090)
+
+#### Project link
+You can find out more about the Prometheus project at [https://prometheus.io/](https://prometheus.io/)
diff --git a/content/supported-exporters/Java/SignalFx.md b/content/supported-exporters/Java/SignalFx.md
new file mode 100644
index 00000000..4c2d9d72
--- /dev/null
+++ b/content/supported-exporters/Java/SignalFx.md
@@ -0,0 +1,79 @@
+---
+title: "SignalFx (Stats)"
+date: 2018-07-22T16:58:03-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of SignalFx.
+You'll need to have:
+
+* A [SignalFx account](https://signalfx.com/)
+* The corresponding [data ingest token](https://docs.signalfx.com/en/latest/admin-guide/tokens.html)
+{{% /notice %}}
+
+SignalFx is a real-time monitoring solution for cloud and distributed applications.
+SignalFx ingests that data and offers various visualizations on charts, dashboards and service maps,
+as well as real-time anomaly detection.
+
+OpenCensus Java has support for this exporter available through the package:
+
+* Stats [io.opencensus.exporter.stats.signalfx](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-stats-signalfx)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+
+##### pom.xml
+
+```xml
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-stats-signalfx</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+        <scope>runtime</scope>
+    </dependency>
+</dependencies>
+```
+
+#### Creating the exporter in code
+
+{{<highlight java>}}
+package io.opencensus.tutorial.signalfx;
+
+import io.opencensus.common.Duration;
+import io.opencensus.exporter.stats.signalfx.SignalFxStatsConfiguration;
+import io.opencensus.exporter.stats.signalfx.SignalFxStatsExporter;
+
+public class SignalFxTutorial {
+    public static void main(String ...args) {
+        String signalFxToken = "";
+
+        SignalFxStatsExporter.create(
+            SignalFxStatsConfiguration.builder()
+                .setToken(signalFxToken)
+                .setExportInterval(Duration.create(3, 2))
+                .build());
+    }
+}
+{{</highlight>}}
diff --git a/content/supported-exporters/Java/Stackdriver.md b/content/supported-exporters/Java/Stackdriver.md
new file mode 100644
index 00000000..cdf97423
--- /dev/null
+++ b/content/supported-exporters/Java/Stackdriver.md
@@ -0,0 +1,154 @@
+---
+title: "Stackdriver (Stats and Tracing)"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the Google Cloud Platform Console.
+You can track how requests propagate through your application and receive detailed near real-time performance insights.
+Stackdriver Trace automatically analyzes all of your application's traces to generate in-depth latency reports to surface performance degradations,
+and can capture traces from all of your VMs, containers, or Google App Engine projects.
+
+Stackdriver Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications.
+Stackdriver collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services, hosted uptime probes, application instrumentation,
+and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others.
+Stackdriver ingests that data and generates insights via dashboards, charts, and alerts. Stackdriver alerting helps you collaborate by
+integrating with Slack, PagerDuty, HipChat, Campfire, and more.
+
+OpenCensus Java has support for this exporter available through packages:
+* Stats [io.opencensus.exporter.stats.stackdriver](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-stats-stackdriver)
+* Trace [io.opencensus.exporter.trace.stackdriver](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-trace-stackdriver)
+
+#### Table of contents
+- [Creating the exporters](#creating-the-exporters)
+- [Viewing your metrics](#viewing-your-metrics)
+- [Viewing your traces](#viewing-your-traces)
+
+##### Creating the exporters
+To create the exporters, you'll need to:
+
+* Have a GCP Project ID
+* Have already enabled Stackdriver Tracing and Metrics, if not, please visit the [Code lab](/codelabs/stackdriver)
+* Use Maven to set up your pom.xml file
+* Create the exporters in code
+
+##### pom.xml
+
+```xml
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+</dependencies>
+```
+
+##### Creating the exporters in code
+* Click on each tab below to see the code for the respective exporter; the combined version follows at the end
+
+{{<tabs Trace Stats>}}
+{{<tab Trace>}}
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+StackdriverTraceExporter.createAndRegister(
+    StackdriverTraceConfiguration.builder()
+        .setProjectId(gcpProjectId)
+        .build());
+{{</tab>}}
+
+{{<tab Stats>}}
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+
+StackdriverStatsExporter.createAndRegister(
+    StackdriverStatsConfiguration.builder()
+        .setProjectId(gcpProjectId)
+        .build());
+{{</tab>}}
+{{</tabs>}}
+
+
+##### All exporters combined
+* Finally, combining tracing and stats exporters together, we should now have
+
+{{<highlight java>}}
+package io.opencensus.tutorial.stackdriver;
+
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
+import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
+import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;
+
+public class StackdriverTutorial {
+    public static void main(String ...args) throws Exception {
+        String gcpProjectId = envOrAlternative("GCP_PROJECT_ID");
+
+        // The trace exporter
+        StackdriverTraceExporter.createAndRegister(
+            StackdriverTraceConfiguration.builder()
+                .setProjectId(gcpProjectId)
+                .build());
+
+        // The stats exporter
+        StackdriverStatsExporter.createAndRegister(
+            StackdriverStatsConfiguration.builder()
+                .setProjectId(gcpProjectId)
+                .build());
+    }
+
+    private static String envOrAlternative(String key, String ...alternatives) {
+        String value = System.getenv(key);
+        if (value != null && !value.isEmpty())
+            return value;
+
+        // Otherwise now look for the alternatives.
+        for (String alternative : alternatives) {
+            if (alternative != null && !alternative.isEmpty()) {
+                value = alternative;
+                break;
+            }
+        }
+
+        return value;
+    }
+}
+{{</highlight>}}
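The `envOrAlternative` helper in the tutorial above is just an environment lookup with fallbacks. A standalone, testable sketch of the same pattern follows — note that Java `String`s must be compared with `isEmpty()`/`equals()` rather than `!=`, which only compares references:

```java
public class EnvOrAlternative {
    // Returns the value of the environment variable `key`, or the first
    // non-empty alternative when the variable is unset or empty.
    public static String envOrAlternative(String key, String... alternatives) {
        String value = System.getenv(key);
        if (value != null && !value.isEmpty())
            return value;

        // Otherwise look for the first usable alternative.
        for (String alternative : alternatives) {
            if (alternative != null && !alternative.isEmpty())
                return alternative;
        }
        return value; // null (or "") when nothing usable was found
    }
}
```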
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/supported-exporters/Java/Zipkin.md b/content/supported-exporters/Java/Zipkin.md
new file mode 100644
index 00000000..0b5789b7
--- /dev/null
+++ b/content/supported-exporters/Java/Zipkin.md
@@ -0,0 +1,76 @@
+---
+title: "Zipkin (Tracing)"
+date: 2018-07-22T17:20:12-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Zipkin for visualizing your data. For assistance setting up Zipkin, [Click here](/codelabs/zipkin) for a guided codelab.
+{{% /notice %}}
+
+Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.
+
+It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper.
+
+OpenCensus Java has support for this exporter available through package [io.opencensus.exporter.trace.zipkin](https://www.javadoc.io/doc/io.opencensus/opencensus-exporter-trace-zipkin)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Zipkin endpoint available to receive traces
+
+#### pom.xml
+If using Maven, add these to your pom.xml file
+```xml
+<properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <opencensus.version>0.14.0</opencensus.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-api</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-exporter-trace-zipkin</artifactId>
+        <version>${opencensus.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.opencensus</groupId>
+        <artifactId>opencensus-impl</artifactId>
+        <version>${opencensus.version}</version>
+        <scope>runtime</scope>
+    </dependency>
+</dependencies>
+```
+
+{{<highlight java>}}
+package io.opencensus.tutorial.zipkin;
+
+import io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter;
+
+public class ZipkinTutorial {
+    public static void main(String ...args) throws Exception {
+        ZipkinTraceExporter.createAndRegister("http://localhost:9411/api/v2/spans", "service-a");
+    }
+}
+{{</highlight>}}
+
+#### Viewing your traces
+Please visit the Zipkin UI endpoint [http://localhost:9411](http://localhost:9411)
+
+#### Project link
+You can find out more about the Zipkin project at [https://zipkin.io/](https://zipkin.io/)
diff --git a/content/supported-exporters/Java/_index.md b/content/supported-exporters/Java/_index.md
new file mode 100644
index 00000000..3f6c71f4
--- /dev/null
+++ b/content/supported-exporters/Java/_index.md
@@ -0,0 +1,24 @@
+---
+title: "Java"
+date: 2018-07-21T18:40:01-02:00
+draft: false
+weight: 2
+class: "resized-logo"
+---
+
+
+
+For full reference, you can visit:
+
+* JavaDoc [https://www.javadoc.io/doc/io.opencensus/opencensus-api](https://www.javadoc.io/doc/io.opencensus/opencensus-api)
+* Github repository [https://github.com/census-instrumentation/opencensus-java](https://github.com/census-instrumentation/opencensus-java)
+
+OpenCensus Java provides support for various exporters like:
+
+* [Google Stackdriver Tracing and Monitoring](/supported-exporters/java/stackdriver)
+* [Instana](/supported-exporters/java/instana)
+* [Prometheus Monitoring](/supported-exporters/java/prometheus)
+* [Zipkin](/supported-exporters/java/zipkin)
+* [Jaeger](/supported-exporters/java/jaeger)
+* [SignalFx](/supported-exporters/java/signalfx)
+* [Logging](/supported-exporters/java/logging)
diff --git a/content/supported-exporters/Node.js/_index.md b/content/supported-exporters/Node.js/_index.md
new file mode 100644
index 00000000..e69de29b
diff --git a/content/supported-exporters/PHP/_index.md b/content/supported-exporters/PHP/_index.md
new file mode 100644
index 00000000..e69de29b
diff --git a/content/supported-exporters/Python/Jaeger.md b/content/supported-exporters/Python/Jaeger.md
new file mode 100644
index 00000000..5872c672
--- /dev/null
+++ b/content/supported-exporters/Python/Jaeger.md
@@ -0,0 +1,63 @@
+---
+title: "Jaeger (Tracing)"
+date: 2018-07-22T23:33:31-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Jaeger for visualizing your data. For assistance setting up Jaeger, [Click here](/codelabs/jaeger) for a guided codelab.
+{{% /notice %}}
+
+Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies.
+It is used for monitoring and troubleshooting microservices-based distributed systems, including:
+
+* Distributed context propagation
+* Distributed transaction monitoring
+* Root cause analysis
+* Service dependency analysis
+* Performance / latency optimization
+
+OpenCensus Python has support for this exporter available through package [opencensus.trace.exporters.jaeger_exporter](https://github.com/census-instrumentation/opencensus-python/blob/master/opencensus/trace/exporters/jaeger_exporter.py)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Jaeger endpoint available to receive traces
+
+{{<highlight python>}}
+#!/usr/bin/env python
+
+from opencensus.trace.exporters.jaeger_exporter import JaegerExporter
+from opencensus.trace.tracer import Tracer
+
+def main():
+    je = JaegerExporter(service_name="service-b",
+                        host_name='localhost',
+                        agent_port=6831,
+                        endpoint='/api/traces')
+
+    tracer = Tracer(exporter=je)
+    with tracer.span(name='doingWork') as span:
+        for i in range(10):
+            continue
+
+if __name__ == "__main__":
+    main()
+{{</highlight>}}
+
+
+#### Viewing your traces
+Please visit the Jaeger UI endpoint [http://localhost:16686](http://localhost:16686)
+
+#### Project link
+You can find out more about the Jaeger project at [https://www.jaegertracing.io/](https://www.jaegertracing.io/)
diff --git a/content/supported-exporters/Python/Stackdriver.md b/content/supported-exporters/Python/Stackdriver.md
new file mode 100644
index 00000000..593891d9
--- /dev/null
+++ b/content/supported-exporters/Python/Stackdriver.md
@@ -0,0 +1,68 @@
+---
+title: "Stackdriver (Tracing)"
+date: 2018-07-22T23:47:14-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Stackdriver for visualizing your data. For assistance setting up Stackdriver, [Click here](/codelabs/stackdriver) for a guided codelab.
+{{% /notice %}}
+
+Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the Google Cloud Platform Console.
+You can track how requests propagate through your application and receive detailed near real-time performance insights.
+Stackdriver Trace automatically analyzes all of your application's traces to generate in-depth latency reports to surface performance degradations,
+and can capture traces from all of your VMs, containers, or Google App Engine projects.
+
+Stackdriver Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications.
+Stackdriver collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services, hosted uptime probes, application instrumentation,
+and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others.
+Stackdriver ingests that data and generates insights via dashboards, charts, and alerts. Stackdriver alerting helps you collaborate by
+integrating with Slack, PagerDuty, HipChat, Campfire, and more.
+
+OpenCensus Python has support for this exporter available through package:
+* Trace [opencensus.trace.exporters.stackdriver_exporter](https://census-instrumentation.github.io/opencensus-python/trace/api/stackdriver_exporter.html)
+
+#### Table of contents
+- [Creating the exporters](#creating-the-exporters)
+- [Viewing your traces](#viewing-your-traces)
+
+##### Creating the exporters
+To create the exporters, you'll need to:
+
+* Have a GCP Project ID
+* Have already enabled Stackdriver Tracing and Metrics, if not, please visit the [Code lab](/codelabs/stackdriver)
+* Create the exporters in code
+
+##### Creating the exporter in code
+{{<highlight python>}}
+#!/usr/bin/env python
+
+import os
+
+from opencensus.trace.tracer import Tracer
+from opencensus.trace.exporters import stackdriver_exporter
+from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport
+
+def main():
+    sde = stackdriver_exporter.StackdriverExporter(
+        project_id=os.environ.get("GCP_PROJECT_ID"),
+        transport=BackgroundThreadTransport)
+
+    tracer = Tracer(exporter=sde)
+    with tracer.span(name='doingWork') as span:
+        for i in range(10):
+            continue
+
+if __name__ == "__main__":
+    main()
+{{</highlight>}}
+
+#### Viewing your metrics
+Please visit [https://console.cloud.google.com/monitoring](https://console.cloud.google.com/monitoring)
+
+#### Viewing your traces
+Please visit [https://console.cloud.google.com/traces/traces](https://console.cloud.google.com/traces/traces)
diff --git a/content/supported-exporters/Python/Zipkin.md b/content/supported-exporters/Python/Zipkin.md
new file mode 100644
index 00000000..c7861996
--- /dev/null
+++ b/content/supported-exporters/Python/Zipkin.md
@@ -0,0 +1,57 @@
+---
+title: "Zipkin (Tracing)"
+date: 2018-07-22T23:12:15-07:00
+draft: false
+weight: 3
+class: "resized-logo"
+---
+
+
+
+{{% notice note %}}
+This guide makes use of Zipkin for visualizing your data. For assistance setting up Zipkin, [Click here](/codelabs/zipkin) for a guided codelab.
+{{% /notice %}}
+
+Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.
+
+It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper.
+
+OpenCensus Python has support for this exporter available through package [opencensus.trace.exporters.zipkin_exporter](https://census-instrumentation.github.io/opencensus-python/trace/api/zipkin_exporter.html)
+
+#### Table of contents
+- [Creating the exporter](#creating-the-exporter)
+- [Viewing your traces](#viewing-your-traces)
+- [Project link](#project-link)
+
+##### Creating the exporter
+To create the exporter, we'll need to:
+
+* Create an exporter in code
+* Have the Zipkin endpoint available to receive traces
+
+{{<highlight python>}}
+#!/usr/bin/env python
+
+from opencensus.trace.exporters.zipkin_exporter import ZipkinExporter
+from opencensus.trace.tracer import Tracer
+
+def main():
+    ze = ZipkinExporter(service_name="service-a",
+                        host_name='localhost',
+                        port=9411,
+                        endpoint='/api/v2/spans')
+
+    tracer = Tracer(exporter=ze)
+    with tracer.span(name='doingWork') as span:
+        for i in range(10):
+            continue
+
+if __name__ == "__main__":
+    main()
+{{</highlight>}}
+
+#### Viewing your traces
+Please visit the Zipkin UI endpoint [http://localhost:9411](http://localhost:9411)
+
+#### Project link
+You can find out more about the Zipkin project at [https://zipkin.io/](https://zipkin.io/)
diff --git a/content/supported-exporters/Python/_index.md b/content/supported-exporters/Python/_index.md
new file mode 100644
index 00000000..4f9355ee
--- /dev/null
+++ b/content/supported-exporters/Python/_index.md
@@ -0,0 +1,15 @@
+---
+title: "Python"
+date: 2018-07-22T23:12:15-07:00
+draft: false
+weight: 2
+class: "resized-logo"
+---
+
+
+
+OpenCensus Python provides support for various exporters like:
+
+* [Google Stackdriver Tracing](/supported-exporters/python/stackdriver/)
+* [Jaeger](/supported-exporters/python/jaeger)
+* [Zipkin](/supported-exporters/python/zipkin)
diff --git a/content/supported-exporters/Ruby/_index.md b/content/supported-exporters/Ruby/_index.md
new file mode 100644
index 00000000..e69de29b
diff --git a/content/supported-exporters/_index.md b/content/supported-exporters/_index.md
new file mode 100644
index 00000000..37464cb4
--- /dev/null
+++ b/content/supported-exporters/_index.md
@@ -0,0 +1,9 @@
+---
+title: "Supported Exporters"
+date: 2018-07-21T14:27:35-07:00
+draft: false
+weight: 50
+---
+
+OpenCensus' language implementations have support for various exporters.
+If you don't yet know what an exporter is, please visit [exporters](/core-concepts/exporters).
diff --git a/content/supported-transports/_index.md b/content/supported-transports/_index.md
new file mode 100644
index 00000000..54c0d8df
--- /dev/null
+++ b/content/supported-transports/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Supported Transports"
+date: 2018-07-16T14:29:35-07:00
+draft: false
+weight: 60
+---
+
+{{% children %}}
diff --git a/content/supported-transports/grpc.md b/content/supported-transports/grpc.md
new file mode 100644
index 00000000..3d04be50
--- /dev/null
+++ b/content/supported-transports/grpc.md
@@ -0,0 +1,19 @@
+---
+title: "gRPC"
+date: 2018-07-16T14:29:38-07:00
+draft: false
+class: "resized-logo"
+---
+
+
+
+gRPC is a high performance, open-source universal RPC framework.
+
+Out of the box, OpenCensus is integrated with gRPC in the following languages:
+
+Language|Integration|Resource
+---|---|---
+C++|grpc|https://github.com/census-instrumentation/opencensus-cpp/tree/master/examples/grpc
+Go|grpc-go|https://medium.com/@orijtech/opencensus-for-go-grpc-developers-7f3ee1ac3d6d
+Java|grpc-java|https://medium.com/@orijtech/opencensus-for-java-grpc-developers-23c25de0a057
+Python|grpc-python|https://medium.com/@orijtech/opencensus-for-python-grpc-developers-9e460e054395
diff --git a/content/supported-transports/http.md b/content/supported-transports/http.md
new file mode 100644
index 00000000..661eb971
--- /dev/null
+++ b/content/supported-transports/http.md
@@ -0,0 +1,5 @@
+---
+title: "HTTP"
+date: 2018-07-16T14:29:41-07:00
+draft: false
+---
diff --git a/content/traces.md b/content/traces.md
deleted file mode 100644
index ad23aeed..00000000
--- a/content/traces.md
+++ /dev/null
@@ -1,140 +0,0 @@
-+++
-title = "Traces"
-type = "leftnav"
-date = "2018-05-30T15:37:24-05:00"
-+++
-
-A trace tracks the progression of a single user request
-as it is handled by other services that make up an application.
-
-Each unit work is called a span in a trace. Spans include metadata about the work,
-including the time spent in the step (latency) and status.
-You can use tracing to debug errors and
-latency issues of your application.
-
----
-
-## Spans
-
-A trace is a tree of spans.
-
-A span is the unit of work represented in a trace. A span may
-represent a HTTP request, an RPC, a server handler,
-a database query or a section customly marked in user code.
-
-
-
-Above, you see a trace with various spans. In order to respond
-to `/messages`, several other internal requests are made. First,
-we are checking if the user is authenticated, we are trying to
-get the results from the cache. It is a cache miss, hence we
-query the database for the results, we cache the results back,
-and respond back to the user.
-
-There are two types of spans:
-
-* **Root span**: Root spans don't have a parent span. They are the
- first span. `/messages` span above is a root span.
-* **Child span**: Child spans have an existing span as their parent.
-
-
-Spans are identified with an ID and are associated to a trace.
-These identifiers and options byte together are called span context.
-Inside the same process, span context is propagated in a context
-object. When crossing process boundaries, it is serialized into
-protocol headers. The receiving end can read the span context
-and create child spans.
-
-### Name
-
-Span names represent what span does. Span names should
-be statistically meaningful. Most tracing backend and analysis
-tools use span names to auto generate reports for the
-represented work.
-
-Examples of span names:
-
-* "cache.Get" represents the Get method of the cache service.
-* "/messages" represents the messages web page.
-* "/api/user/(\\d+)" represents the user detail pages.
-
-### Status
-
-Status represents the current state of the span.
-It is represented by a canonical status code which maps onto a
-predefined set of error values and an optional string message.
-
-Status allows tracing visualization tools to highlight
-unsuccessful spans and helps tracing users to debug errors.
-
-
-
-Above, you can see `cache.Put` is errored because of the
-violation of the key size limit. As a result of this error,
- `/messages` request responded with an error to the user.
-
-### Annotations
-
-Annotations are timestamped strings with optional attributes.
-Annotations are used like log lines, but the log is per-Span.
-
-Example annotations:
-
-* 0.001s Evaluating database failover rules.
-* 0.002s Failover replica selected. attributes:{replica:ab_001 zone:xy}
-* 0.006s Response received.
-* 0.007s Response requires additional lookups. attributes:{fanout:4}
-
-Annotations provide rich details to debug problems in the scope of a span.
-
-### Attributes
-
-Attributes are additional information that is included in the
-span which can represent arbitrary data assigned by the user.
-They are key-value pairs with the key being a string and the
-value being either a string, boolean, or integer.
-
-Examples of attributes:
-
-* {http_code: 200}
-* {zone: "us-central2"}
-* {container_id: "replica04ed"}
-
-Attributes can be used to query the tracing data and allow
-users to filter large volumes of tracing data. For example, you can
-filter the traces by HTTP status code or availability zone by
-using the example attributes above.
-
----
-
-## Sampling
-
-Trace data is often very large in size and is expensive to collect.
-This is why rather than collecting traces for every request, downsampling
-is prefered. By default, OpenCensus provides a probabilistic sampler that
-will trace once in every 10,000 requests.
-
-You can set a custom probablistic sampler, prefer to always sample or
-not sample at all.
-There are two ways to set samplers:
-
-* **Global sampler**: Global sampler is the global default.
-* **Span sampler**: When starting a new span, a custom
- sampler can be provided. If no custom sampling is
- provided, global sampler is used. Span samplers are
- useful if you want to over-sample some sections of your
- code. For example, a low throughput background service
- may use a higher sampling rate than a high-load RPC
- server.
-
----
-
-## Exporting
-
-Recorded spans will be reported by the registered exporters.
-
-Multiple exporters can be registered to upload the data to
-various different backends. Users can unregister the exporters
-if they no longer are needed.
-
-See [exporters](/exporters) to learn more.
diff --git a/content/zpages.md b/content/zpages.md
deleted file mode 100644
index c0244f3a..00000000
--- a/content/zpages.md
+++ /dev/null
@@ -1,68 +0,0 @@
-+++
-title = "Z-Pages"
-type = "leftnav"
-+++
-
-OpenCensus provides in-process web pages that displays
-collected data from the process. These pages are called z-pages
-and they are useful to see collected data from a specific process
-without having to depend on any metric collection or
-distributed tracing backend.
-
-Z-Pages can be useful during the development time or when
-the process to be inspected is known in production.
-Z-Pages can also be used to debug [exporter](/exporters) issues.
-
-In order to serve Z-pages, register their handlers and
-start a web server. Below, there is an example how to
-serve these pages from `127.0.0.1:7777/debug`.
-
-
-{{% snippets %}}
-{{% go %}}
-```
-import "go.opencensus.io/zpages"
-
-zpages.Handle(nil, "/debug")
-log.Fatal(http.ListenAndServe("127.0.0.1:7777", nil))
-```
-{{% /go %}}
-{{% java %}}
-```
-// Add the dependencies by following the instructions at
-// https://github.com/census-instrumentation/opencensus-java/tree/master/contrib/zpages
-
-ZPageHandlers.startHttpServerAndRegisterAll(7777);
-```
-{{% /java %}}
-{{% /snippets %}}
-
-Once handler is registered, there are various pages provided
-from the libraries:
-
-* [127.0.0.1:7777/debug/rpcz](http://127.0.0.1:7777/debug/rpcz)
-* [127.0.0.1:7777/debug/tracez](http://127.0.0.1:7777/debug/tracez)
-
-## /rpcz
-
-Rpcz page is available at [/rpcz](http://127.0.0.1:7777/debug/rpcz).
-This page serves stats about sent and received RPCs.
-
-Available stats:
-
-* Number of RPCs made per minute, hour and in total.
-* Average latency in the last minute, hour and since the process started.
-* RPCs per second in the last minute, hour and since the process started.
-* Input payload in KB/s in the last minute, hour and since the process started.
-* Output payload in KB/s in the last minute, hour and since the process started.
-* Number of RPC errors in the last minute, hour and in total.
-
-## /tracez
-
-Tracez page is available at [/tracez](http://127.0.0.1:7777/debug/tracez).
-This page serves details about the trace spans collected in the process.
-It provides several sample spans per latency bucket and sample errored spans.
-
-An example screenshot from this page is below:
-
-
diff --git a/layouts/.gitkeep b/layouts/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/layouts/codelabs/googlecloudstorage.html b/layouts/codelabs/googlecloudstorage.html
new file mode 100644
index 00000000..6f536fe1
--- /dev/null
+++ b/layouts/codelabs/googlecloudstorage.html
@@ -0,0 +1,90 @@
+
+
+
+
+
+
+
+
+ Setup and Configure Google Cloud Storage
+
+
+
+
+
+
+
+
+
+
+
+
This tutorial shows you how to set up and configure Google Cloud Storage
+
Requirements:
+
+
A Google Cloud Platform project
+
+
+
+
+
+
+
If you haven't already created a project on Google Cloud, you can do so here.
This tutorial shows you how to setup and configure Jaeger
+
+
Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including:
+
Distributed context propagation Distributed transaction monitoring Root cause analysis Service dependency analysis Performance / latency optimization
This tutorial shows you how to set up and configure Prometheus
+
+
Prometheus is a monitoring system that collects metrics from systems by scraping exposed endpoints at a regular interval. It evaluates rule expressions and displays results. It can also trigger alerts if alert conditions are met.
+
Requirements:
+
+
An installation of Prometheus which you can get from here Install Prometheus
+
+
+
+
+
+
+
Prometheus Monitoring requires a system configuration, usually in the form of a ".yaml" file. For example, here is a sample "prometheus.yaml" file to scrape from our servers running at localhost:9888, localhost:9988 and localhost:9989