Is this a request for help?: no
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Which chart in which version:
prometheus-thanos 4.9.3
What happened:
The Thanos compactor can shut down without exiting: its ready and healthy probe states change, but the process keeps running. Because the chart defines no liveness probe, the unhealthy state is not detected and the pod is not restarted.
Logs:
level=warn ts=2022-05-17T12:08:22.350952394Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="BaseFetcher: iter bucket: context deadline exceeded"
level=info ts=2022-05-17T12:08:22.350965294Z caller=http.go:74 service=http/server component=compact msg="internal server is shutting down" err="BaseFetcher: iter bucket: context deadline exceeded"
level=info ts=2022-05-17T12:08:22.352757502Z caller=http.go:93 service=http/server component=compact msg="internal server is shutdown gracefully" err="BaseFetcher: iter bucket: context deadline exceeded"
level=info ts=2022-05-17T12:08:22.352802302Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason="BaseFetcher: iter bucket: context deadline exceeded"
level=error ts=2022-05-17T12:08:22.840731904Z caller=compact.go:480 msg="critical error detected; halting" err="compaction: group 0@10346066409509485645: compact blocks [data/compact/0@10346066409509485645/01EZE4EQFSY10D4BD48CH48ZFZ data/compact/0@10346066409509485645/01EZEBAEQTX88ADDJANYM36YMV data/compact/0@10346066409509485645/01EZEJ65ZQ0MBE1TFPN3MYDQ4A data/compact/0@10346066409509485645/01EZES1X7R80XQ4XBK1GK4XT5H]: 2 errors: populate block: add series: context canceled; context canceled"
The /-/ready and /-/healthy endpoints were added back in 2019, but the corresponding readiness and liveness probes are missing from the chart. (As noted in the issue, the readiness probe is not strictly needed since the compactor does not serve requests, but the liveness probe should be there.)
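For illustration, a minimal sketch of the kind of probe the chart could add to the compactor container spec. The port and thresholds here are assumptions, not taken from the chart (10902 is the default Thanos HTTP port):

    # Hypothetical liveness probe for the compactor container.
    # Port and timing values are illustrative, not from the chart.
    livenessProbe:
      httpGet:
        path: /-/healthy
        port: 10902        # default Thanos --http-address port
      initialDelaySeconds: 30
      periodSeconds: 30
      failureThreshold: 4

With something like this in place, the not-healthy state shown in the logs above would cause kubelet to restart the container instead of leaving it idle.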
What you expected to happen:
The unhealthy state should be detected and the compactor pod restarted in case of error.
How to reproduce it (as minimally and precisely as possible):
In our case, bucket operations against object storage can apparently time out and give up after the configured number of retries ("BaseFetcher: iter bucket: context deadline exceeded" in the logs above). The internal HTTP server then shuts down, but the process goes idle instead of exiting. We are using Azure storage and have configured all timeouts to 60s with 5 retries.
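For context, the relevant part of our Azure objstore configuration looks roughly like the sketch below. Field names follow the Thanos objstore documentation for Azure as we understand it; account and container values are placeholders:

    # Sketch of the Azure objstore settings in question (placeholders, not
    # our real credentials); pipeline_config per the Thanos objstore docs.
    type: AZURE
    config:
      storage_account: "<account>"
      storage_account_key: "<key>"
      container: "<container>"
      pipeline_config:
        max_tries: 5        # give up after 5 attempts
        try_timeout: 60s    # per-attempt timeout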
Anything else we need to know: