📢 This external scaler only supports KEDA v1.x and is no longer maintained. If you want to scale applications using Durable Functions, we recommend using the SQL backend with our MS SQL scaler.
KEDA Durable Functions Scaler is an extension that enables autoscaling of Durable Functions deployed on a Kubernetes cluster. The extension uses KEDA's External Scaler support.
The key features of KEDA Durable Functions Scaler are:
- Intelligent Auto Scaling
- One-liner deployment using Helm
KEDA supports multiple scalers. This project provides a Durable Functions scaler for KEDA, which lets you deploy Durable Functions on Kubernetes with autoscaling.
KEDA Durable Functions Scaler works as a gRPC server for KEDA's External Scaler support (a sketch of such a service follows the list below). It:
- Hosts the gRPC services with ASP.NET Core
- Watches the control/worker queues through DurableTask and uses its scale recommendation
- Gets the current worker count from the Kubernetes API server
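To make the shape of this concrete, here is a minimal sketch of such an external scaler gRPC service in C#. It assumes server-side types generated from KEDA's `externalscaler.proto` (`ExternalScaler.ExternalScalerBase`, `ScaledObjectRef`, `IsActiveResponse`, `GetMetricSpecResponse`, `GetMetricsRequest`/`GetMetricsResponse`); the exact service and message shapes differ between KEDA versions, and `IQueueMetricsSource` is a hypothetical abstraction over the DurableTask queue metrics, not a type from this project.

```csharp
// Minimal sketch of a KEDA external scaler gRPC service hosted in ASP.NET Core.
// Assumes C# types generated from KEDA's externalscaler.proto; exact
// message/field names vary between KEDA versions. IQueueMetricsSource is a
// hypothetical abstraction over the DurableTask control/work-item queues.
using System.Threading.Tasks;
using Grpc.Core;

public interface IQueueMetricsSource
{
    Task<long> GetPendingWorkItemCountAsync();
}

public class DurableFunctionsScalerService : ExternalScaler.ExternalScalerBase
{
    private const string QueueLengthMetric = "durable-functions-queue-length";
    private readonly IQueueMetricsSource _queues;

    public DurableFunctionsScalerService(IQueueMetricsSource queues) => _queues = queues;

    // KEDA asks whether the target deployment should be considered active at all.
    public override async Task<IsActiveResponse> IsActive(
        ScaledObjectRef request, ServerCallContext context)
    {
        var pending = await _queues.GetPendingWorkItemCountAsync();
        return new IsActiveResponse { Result = pending > 0 };
    }

    // KEDA asks which metric to scale on and what the per-pod target value is.
    public override Task<GetMetricSpecResponse> GetMetricSpec(
        ScaledObjectRef request, ServerCallContext context)
    {
        var response = new GetMetricSpecResponse();
        response.MetricSpecs.Add(new MetricSpec { MetricName = QueueLengthMetric, TargetSize = 1 });
        return Task.FromResult(response);
    }

    // KEDA periodically polls the current value of that metric.
    public override async Task<GetMetricsResponse> GetMetrics(
        GetMetricsRequest request, ServerCallContext context)
    {
        var pending = await _queues.GetPendingWorkItemCountAsync();
        var response = new GetMetricsResponse();
        response.MetricValues.Add(new MetricValue { MetricName = QueueLengthMetric, MetricValue_ = pending });
        return response;
    }
}
```

In an ASP.NET Core host, a service like this would typically be exposed with `MapGrpcService<DurableFunctionsScalerService>()` and referenced by a `ScaledObject`'s `external` trigger.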
Currently, the KEDA Durable Functions Scaler can't scale functions down to zero; the minimum pod count is one. Data still has to be sent to the control/worker queues, so achieving scale-to-zero would require separating the HTTP and non-HTTP deployments. However, that feature does not appear to work yet, and we need to wait until the following issues are fixed (an illustrative `ScaledObject` sketch follows them):
- Add configuration for enabling only HTTP or only non-HTTP functions #4412
- Pods doesn't scale in to zero #17
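For context, this is roughly what a KEDA v1 `ScaledObject` targeting an external scaler looks like with the current one-replica floor. The manifest below is illustrative only: the deployment name and scaler address are hypothetical, not values taken from this project, and field names may differ slightly across KEDA v1 releases.

```yaml
# Illustrative only: a KEDA v1 ScaledObject using an external scaler.
# Names and the scaler address are hypothetical, not taken from this project.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: durable-functions-worker
  labels:
    deploymentName: durable-functions-worker
spec:
  scaleTargetRef:
    deploymentName: durable-functions-worker
  minReplicaCount: 1   # scale-to-zero is not supported yet (see the issues above)
  maxReplicaCount: 10
  triggers:
    - type: external
      metadata:
        scalerAddress: durable-functions-scaler:5000
```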
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.