Add range response KV length as a metric #16881

Closed · wants to merge 2 commits
7 changes: 7 additions & 0 deletions server/etcdserver/metrics.go
@@ -118,6 +118,12 @@ var (
 		Name: "lease_expired_total",
 		Help: "The total number of expired leases.",
 	})
+	rangeResponseKvCount = prometheus.NewHistogramVec(prometheus.HistogramOpts{
+		Namespace: "etcd",
+		Subsystem: "server",
+		Name:      "range_response_kv_count",
+		Help:      "The number of KVs returned by range calls.",
+	}, []string{"range_begin"})
 
 	currentVersion = prometheus.NewGaugeVec(prometheus.GaugeOpts{
 		Namespace: "etcd",
@@ -168,6 +174,7 @@ func init() {
 	prometheus.MustRegister(slowReadIndex)
 	prometheus.MustRegister(readIndexFailed)
 	prometheus.MustRegister(leaseExpired)
+	prometheus.MustRegister(rangeResponseKvCount)
 	prometheus.MustRegister(currentVersion)
 	prometheus.MustRegister(currentGoVersion)
 	prometheus.MustRegister(serverID)
3 changes: 3 additions & 0 deletions server/etcdserver/v3_server.go
@@ -136,6 +136,9 @@ func (s *EtcdServer) Range(ctx context.Context, r *pb.RangeRequest) (*pb.RangeResponse, error) {
 		err = serr
 		return nil, err
 	}
+
+	rangeResponseKvCount.WithLabelValues(string(r.Key)).Observe(float64(len(resp.Kvs)))
Comment from the PR author (Contributor):
Given that the key label could produce a very large /metrics response, we might want to emit this only when n > 1000 or so. I also considered attaching this to the expensive-request trace above.

Let me know what you all think.
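The thresholded emission suggested above could be sketched roughly as follows. This is not code from the PR: the constant name, the helper function, and the `observe` callback (standing in for `rangeResponseKvCount.WithLabelValues(...).Observe(...)`) are all hypothetical.

```go
package main

import "fmt"

// largeRangeResponseThreshold is a hypothetical cutoff (the "n > 1000"
// from the comment above), not a constant in the actual PR.
const largeRangeResponseThreshold = 1000

// maybeObserveRangeResponse records an observation only for responses
// larger than the threshold, so ordinary point lookups never create a
// per-key series in /metrics.
func maybeObserveRangeResponse(key string, kvCount int, observe func(label string, v float64)) {
	if kvCount <= largeRangeResponseThreshold {
		return
	}
	observe(key, float64(kvCount))
}

func main() {
	emitted := 0
	observe := func(label string, v float64) { emitted++ }
	maybeObserveRangeResponse("/registry/pods/", 5000, observe) // recorded
	maybeObserveRangeResponse("/small", 3, observe)             // skipped
	fmt.Println("series emitted:", emitted)
}
```

Gating on size would bound label cardinality to the (presumably small) set of keys that actually return large responses, at the cost of losing visibility into the common case.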

Reply from a project Member:

Hey @tjungblu, thanks for raising this idea. With the etcd project taking tentative steps toward adopting the Kubernetes enhancement process for new features and meaningful decisions, this addition could be a candidate for a KEP under sig/etcd first.

That is just an idea, in case you want a larger pool of eyes on the proposal. If you would rather proceed here, I am personally fine with that.

Is there any way you could quantify the impact this new metric would have on the /metrics response size? 🤔
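A rough back-of-envelope answer to the size question could look like this. It assumes the default `prometheus.DefBuckets` (11 buckets, since the PR sets no explicit buckets) and an assumed average exposition line length; neither figure comes from a measurement of this PR.

```go
package main

import "fmt"

// linesPerLabelValue assumes the default prometheus.DefBuckets: 11
// bucket lines, one le="+Inf" bucket line, plus the _sum and _count
// lines. A back-of-envelope assumption, not a measurement.
const linesPerLabelValue = 11 + 1 + 2

// estimateMetricsBytes gives a rough bound on the extra /metrics
// payload: one histogram series family per distinct range key.
func estimateMetricsBytes(distinctKeys, avgLineBytes int) int {
	return distinctKeys * linesPerLabelValue * avgLineBytes
}

func main() {
	// e.g. 10,000 distinct range keys at ~120 bytes per exposition line
	fmt.Println(estimateMetricsBytes(10000, 120)) // 16800000 (~16 MB)
}
```

Under those assumptions an unbounded `range_begin` label grows /metrics by roughly 1.7 KB per distinct key, which is why a threshold or a fixed set of key prefixes may be preferable to the raw key as a label.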


 	return resp, err
 }
