Falcosidekick crashes with concurrent map iteration and map write error #746
Comments
I will try to reproduce the issue. Can you paste your settings for falcosidekick, please (obfuscate all the sensitive data, of course)? Thanks
I did tests with a vanilla config.yaml and just Slack as the enabled output (I used a mock server to avoid Slack's rate limiting), and I wasn't able to replicate the issue with 2.29.0 at ~100 req/s (which is a ridiculous rate for security alerts in real life).
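For reference, a vanilla config of that shape is tiny. This is a minimal sketch, assuming Falcosidekick's `slack` output keys (`webhookurl`, `minimumpriority`); the mock endpoint URL is hypothetical:

```yaml
# Minimal falcosidekick config.yaml sketch: only the Slack output enabled.
slack:
  webhookurl: "http://mock-server.local/slack"  # hypothetical mock endpoint, avoids Slack rate limits
  minimumpriority: ""                           # empty = forward all priorities
```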
Hello again Thomas. Let me know what more I can inform you of. Looking at our Elasticsearch, I see several minutes of 5-10 logs, then a few hundred thousand logs all at once. This makes no sense to me, as the only alerting rules I have apply to a cluster of some 30 nodes, and those are just the k8s audit rules and our own custom rule for SSH intrusion detection. There should not be this insane volume. What can I provide to help reproduce the error?
Do you have more details about which rule is triggered?
Hello Thomas, I saw your Slack message and the other bug report. The alerts I am seeing are: Critical Executing binary not part of base image (proc_exe=/usr/bin/c_rehash proc_sname=sh gparent=ca-certificates ....) So they all appear to be triggered by that rule. Recently I have tried redeploying my instances of Falco with the rules list including only our own custom rules and the k8s audit rules. This has "fixed" the issue by removing the problematic rules and reducing our count from ~120,000 per 5 minutes to ~1,000 per hour, which is much more tolerable, but it removes the majority of the default Falco monitoring for us.
Your rates are really huge; it's noisy for sure. Falco is a security agent: you have to fine-tune the rules to make them fit your environment. It's not supposed to fire so many alerts. I would like to reproduce the bug anyway. I tried with a highly constrained container, to see whether it could be related to a lack of resources leading to a race condition. No success.
I have re-enabled falco_rules, and then disabled the following rules in my config:
But I am still observing the same failure and restart of falcosidekick. Additionally, I see that 2 hosts in my cluster do not seem to be observing that the rules are disabled: they are still alerting, but none of the other 40-some hosts in this cluster are. As far as getting you logs, I'm not even sure where to start. Let me know what would be useful.
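For readers hitting the same noise, this is a hedged sketch of disabling a default rule from a custom rules file loaded after falco_rules.yaml. The rule name is an assumption inferred from the "Executing binary not part of base image" alert output quoted above; check it against your falco_rules.yaml version:

```yaml
# Custom rules file sketch: re-declare a default rule by name with
# enabled: false to stop it from firing, without editing falco_rules.yaml.
- rule: Drop and execute new binary in container  # assumed rule name
  enabled: false
```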
Are you using Helm? If so, the
Describe the bug
Falcosidekick is encountering frequent crashes with a `fatal error: concurrent map iteration and map write`. After updating Falcosidekick to run with a multi-replica configuration (`replicaCount=3`), enabling buffered output (`falco.buffered_outputs=true`), and modifying the rate and burst limits (`rate: 10`, `burst: 20`), failures are still observed. Initially, the WebUI output was enabled, which caused instability. After disabling the WebUI output, stability improved somewhat, but the application continues to crash when handling events.
Errors logged during operation: see the attached falcologs1.txt under Additional context.
How to reproduce it
Deploy Falco with Falcosidekick using the following settings (a hedged sketch of where these live in the Helm chart values follows this list):
- replicaCount=3
- falco.buffered_outputs=true
- rate: 10
- burst: 20
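This is a sketch of where these knobs typically sit in the Falco Helm chart values; the key names, in particular `outputs.max_burst` for what the report calls "burst", are assumptions to verify against your chart version:

```yaml
# Falco Helm chart values sketch (key names assumed; check your chart version).
falco:
  buffered_outputs: true
  outputs:
    rate: 10
    max_burst: 20   # referred to as "burst" in this report
falcosidekick:
  enabled: true
  replicaCount: 3
```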
Expected behavior
Falcosidekick should remain stable and handle a high volume of Falco events in a multi-replica setup without crashing.
Environment
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
VERSION="11 (bullseye)"
t3.large instances
Additional context
Despite disabling the WebUI output (which initially caused instability), the system continues to crash when forwarding events to OpenSearch (via the elasticsearch output) and Slack. Buffered outputs have been enabled to optimize performance, but the issue persists across all replicas.
Stack traces and further details can be found in the attached log and in the original conversation with Thomas Labarussias:
https://kubernetes.slack.com/archives/CMWH3EH32/p1726712928741829
falcologs1.txt