
[BUG] TOO_MANY_REQUESTS error crashes the task with an unrecoverable exception without retries #739

@yeikel

Description


With a configuration such as

"max.retries": "10",
"retry.backoff.ms": "500"

I would expect the connector to keep retrying for roughly 6-8 minutes, given the configured number of retries and the backoff interval.
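As a rough sanity check on that window, here is a worst-case estimate assuming the backoff doubles on each retry (500 ms, 1 s, 2 s, ...). This is my own helper, not connector code, and the connector's RetryUtil adds jitter, so actual waits would be shorter:

```java
public class BackoffEstimate {
    // Worst-case cumulative wait for a doubling backoff schedule
    // (assumed schedule; jitter would reduce the expected wait).
    static long totalBackoffMs(long backoffMs, int maxRetries) {
        long total = 0;
        for (int retry = 0; retry < maxRetries; retry++) {
            total += backoffMs << retry;  // backoffMs * 2^retry
        }
        return total;
    }

    public static void main(String[] args) {
        // "max.retries": "10", "retry.backoff.ms": "500"
        System.out.println(totalBackoffMs(500, 10) + " ms");
        // → 511500 ms, i.e. about 8.5 minutes worst case
    }
}
```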

What I am seeing, however, is that the connector fails immediately with an unrecoverable exception. I believe this might be a bug.

While looking through the error logs and the io.confluent.connect.elasticsearch.RetryUtil class, I do not see any retries logged when this error happens.

If it helps, I have a DLQ configured that works as expected for other failures. When this problem happens, the DLQ is not triggered and the task simply fails with the unrecoverable exception:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:201)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:256)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Indexing record failed.
    at io.confluent.connect.elasticsearch.ElasticsearchClient.handleResponse(ElasticsearchClient.java:639)
    at io.confluent.connect.elasticsearch.ElasticsearchClient$1.afterBulk(ElasticsearchClient.java:428)
    at org.elasticsearch.action.bulk.BulkRequestHandler$1.onResponse(BulkRequestHandler.java:59)
    at org.elasticsearch.action.bulk.BulkRequestHandler$1.onResponse(BulkRequestHandler.java:56)
    at org.elasticsearch.action.ActionListener$RunAfterActionListener.onResponse(ActionListener.java:341)
    at org.elasticsearch.action.bulk.Retry$RetryHandler.finishHim(Retry.java:168)
    at org.elasticsearch.action.bulk.Retry$RetryHandler.onResponse(Retry.java:112)
    at org.elasticsearch.action.bulk.Retry$RetryHandler.onResponse(Retry.java:71)
    at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:214)
    ... 5 more
Caused by: java.lang.Throwable: Response status: 'TOO_MANY_REQUESTS', Index: 'index', Document Id: 'yyty'
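For context, this sketch shows the behavior I expected on a TOO_MANY_REQUESTS (HTTP 429) response: retry up to max.retries times with jittered exponential backoff, and fail only once retries are exhausted. This is a hypothetical helper of my own, not the connector's actual code, and indexWithRetry/sendBulk are made-up names:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.IntSupplier;

public class RetrySketch {
    // Hypothetical helper (not the connector's actual code): retry on
    // HTTP 429 up to maxRetries times with jittered exponential backoff.
    static boolean indexWithRetry(int maxRetries, long backoffMs, IntSupplier sendBulk) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            int status = sendBulk.getAsInt();
            if (status != 429) {
                return status < 400;  // success, or a genuinely non-retriable failure
            }
            if (attempt < maxRetries) {
                // jittered wait in [0, backoffMs * 2^attempt]
                long wait = ThreadLocalRandom.current().nextLong(1 + (backoffMs << attempt));
                try {
                    Thread.sleep(wait);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;  // retries exhausted: only now fail the task / route to the DLQ
    }

    public static void main(String[] args) {
        // Simulated bulk responses: two 429s, then success.
        int[] responses = {429, 429, 200};
        int[] next = {0};
        System.out.println("indexed: " + indexWithRetry(10, 1, () -> responses[next[0]++]));
        // → indexed: true
    }
}
```

What I observe instead is the failure path being taken on the very first 429, bypassing both the retry loop and the DLQ.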
