(optional) already reported to the 3rd-party upstream repository or mailing list if you use a k8s addon or Helm charts.
Steps to replicate
We had a case where the maximum number of open shards had been reached in OpenSearch, so Fluentd was getting an error.
Error:
[warn]: #0 send an error event to @ERROR: error_class=Fluent::Plugin::OpenSearchErrorHandler::OpenSearchError error="400 - Rejected by OpenSearch [error type]: illegal_argument_exception [reason]: 'Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [2999]/[3000] maximum shards open;'"
The error itself is expected, but we did not expect to lose the data.
Even after we increased the maximum number of open shards in OpenSearch, the old logs were never pushed.
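For reference, raising the shard ceiling is done through OpenSearch's cluster settings API; the value below is only an example, adjust it to your cluster size:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
```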
It looks like Fluentd is not retrying on a 400 error and is dropping the data.
We do not want to lose data because of a temporary misconfiguration in OpenSearch or because some limit was reached.
We expected the data to stay in the buffer and be retried until the flush succeeded, without data loss. How can we achieve that when getting similar errors?
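One way to avoid silently dropping such records: the plugin treats a 400 response as unrecoverable and emits the affected records to Fluentd's built-in `@ERROR` label instead of retrying, so you can catch them there and persist them for later replay. A minimal sketch, assuming a hypothetical fallback path `/var/log/fluent/failed`:

```
<label @ERROR>
  <match **>
    @type file
    path /var/log/fluent/failed
    <buffer>
      flush_interval 30s
    </buffer>
  </match>
</label>
```

Records written there can be re-ingested once the shard limit has been raised. Note that from OpenSearch's perspective a 400 is a permanent rejection of that request, so retrying the identical bulk request would keep failing until the cluster setting is changed; capturing and replaying the records is the safer pattern.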
Using Fluentd and OpenSearch plugin versions
Ubuntu
Kubernetes
Fluentd
fluentd 1.16.2
OpenSearch plugin version
fluent-plugin-opensearch (1.1.4)
opensearch-ruby (3.0.1)
OpenSearch version
v2.10.0