# Doorman Simulations

The simulation test sends a large number of requests to the server from several client instances that concurrently access the same resource, each configured with a different rate limit.
The goal of this test is to verify that rate limiting works as expected, and that rate limits adjust dynamically as clients are added or removed.

## Setup
This is more of an end-to-end test, so you will need a few things set up before you can run it and measure the results.

### I. Setting up doorman
 1. Clone this fork of the [doorman](https://github.com/Pythonista7/doorman) repository. I've tried to keep the changes minimal, adding only what is needed to satisfy modern Go requirements so the project builds and runs.
 2. `cd go/cmd/doorman` and run `go build`; you should then see a `doorman` binary in the same directory.
 3. Run the binary with the following command:
    ```shell
    ./doorman -logtostderr -config=path-to-config.yml -port=15000 -debug_port=15050 -etcd_endpoints=http://localhost:2379 -master_election_lock=/doorman.master -master_delay=300s -hostname=localhost
    ```
    > *Note*:
    > * You will need etcd running on your machine: `brew install etcd`, then run `etcd` in a new shell to start the etcd server.
    > * You will also need a config file; you can use the one at `doc/simplecluster/config.yaml` or create your own.
 4. You should see the doorman server running on `localhost:15000` and the debug server running on `localhost:15050`.
 5. For the purposes of the test, add the following to the `config.yaml` file; it defines the resource we will be testing against. (You will need to restart the doorman server for the changes to take effect.)
    ```yaml
    - identifier_glob: tf
      capacity: 100
      safe_capacity: 10
      description: fair share example
      algorithm:
        kind: FAIR_SHARE
        lease_length: 60
        refresh_interval: 30
    ```
 6. You can try a basic sanity check by building and running the CLI client in the `go/cmd/doorman_shell` directory.
```shell
go build
./doorman_shell --server=localhost:15000
> get cli1 tf 100
> get cli2 tf 100
> get cli3 tf 30
> show
client: "cli1"
resource: "tf"
capacity: 0

client: "cli2"
resource: "tf"
capacity: 0

client: "cli3"
resource: "tf"
capacity: 0

# After the refresh_interval (30 seconds as per the config above), running `show`
# again should report the allocated capacities, which indicates we are all good.
# (See the sketch after this block for how the split is computed.)
> show
client: "cli1"
resource: "tf"
capacity: 35

client: "cli2"
resource: "tf"
capacity: 35

client: "cli3"
resource: "tf"
capacity: 30

# Release the resources
> release cli1 tf
> release cli2 tf
> release cli3 tf
> show
>
# (`show` returns nothing, indicating the resources are released from the CLI
# client's point of view; the server still holds them until the lease expires,
# i.e. 60 seconds in this config.)
```
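The 35/35/30 split above is what max-min fair sharing predicts: cli3 asks for 30, less than an equal third of the 100 capacity, so it gets everything it asked for, and the remaining 70 is divided evenly between cli1 and cli2. For intuition, here is a sketch of that "water-filling" idea in Kotlin; this is only an illustration of the math, not Doorman's actual `FAIR_SHARE` implementation:

```kotlin
// Sketch of a max-min fair ("water-filling") allocation, consistent with the
// 35/35/30 split observed above. NOT Doorman's actual FAIR_SHARE code.
fun fairShare(capacity: Double, wants: Map<String, Double>): Map<String, Double> {
    val allocation = mutableMapOf<String, Double>()
    var remainingCapacity = capacity
    val pending = wants.toMutableMap()
    while (pending.isNotEmpty()) {
        val equalShare = remainingCapacity / pending.size
        // Clients asking for no more than the equal share are fully satisfied.
        val satisfied = pending.filterValues { it <= equalShare }
        if (satisfied.isEmpty()) {
            // Everyone wants more than the equal share: split evenly and stop.
            pending.keys.forEach { allocation[it] = equalShare }
            break
        }
        satisfied.forEach { (client, want) ->
            allocation[client] = want
            remainingCapacity -= want
            pending.remove(client)
        }
    }
    return allocation
}

fun main() {
    // cli1 and cli2 want 100 each, cli3 wants 30, total capacity is 100.
    println(fairShare(100.0, mapOf("cli1" to 100.0, "cli2" to 100.0, "cli3" to 30.0)))
    // => {cli3=30.0, cli1=35.0, cli2=35.0}
}
```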

### II. Set up a target server to test against
Clone [this sitting-duck server](https://github.com/Pythonista7/Ktor-Sitting-Duck) and follow the instructions in its README to get the server and a Grafana instance running on http://localhost:3000.


### III. Running this project
 1. Clone this repository, then run a Gradle sync and build.
 2. Run the test `simulationOne()` from the `src/test/kotlin/client/Simulation.kt` file. (A sketch of what such a simulation loop looks like follows below.)
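
For orientation, here is a minimal, self-contained sketch of the simulation's shape: several coroutine-based clients sending paced requests concurrently. The real test obtains each client's rate from a doorman lease and sends HTTP requests to the sitting-duck server; both are stubbed out here (the `SimpleRateLimiter` and the hard-coded rates are assumptions for illustration, not this project's actual API):

```kotlin
import kotlinx.coroutines.*

// Crude fixed-interval pacing; the real test would use the project's own
// rate limiter backed by a doorman capacity lease.
class SimpleRateLimiter(permitsPerSecond: Double) {
    private val intervalMillis = (1000.0 / permitsPerSecond).toLong()
    suspend fun acquire() = delay(intervalMillis)
}

suspend fun runSimulatedClient(name: String, rate: Double, durationMillis: Long) {
    val limiter = SimpleRateLimiter(rate)
    val deadline = System.currentTimeMillis() + durationMillis
    var sent = 0
    while (System.currentTimeMillis() < deadline) {
        limiter.acquire()
        // In the real test this would be an HTTP call to the sitting-duck server.
        sent++
    }
    println("$name sent $sent requests (~$rate rps)")
}

fun main() = runBlocking {
    // Rates mirror the fair-share split doorman hands out for wants 100/100/30.
    listOf("cli1" to 35.0, "cli2" to 35.0, "cli3" to 30.0).map { (name, rate) ->
        launch(Dispatchers.Default) { runSimulatedClient(name, rate, durationMillis = 10_000) }
    }.joinAll()
}
```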
### IV. Observing the results
1. You can observe the results in the Grafana dashboard that you should have running on `http://localhost:3000` with the default credentials.
2. You can also create additional clients from the `doorman_shell` CLI, access the same `tf` resource, and observe in the dashboard how adding/removing clients affects the actual throughput of the other clients, showcasing the dynamic rate limiting.

# Results

*Note*: I sometimes find that only one client is actively sending requests, or only the other two clients are. I'm not sure why this happens, but I suspect it has to do with how the simulation test's coroutines are launched/scheduled; this needs further investigation. (One scheduling-related mitigation to try is sketched below.)
The individual rate limiter tests in [RateLimiterTest.kt](RateLimiterTest.kt) do work as expected, though, and the simulation should ideally behave the same way.
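
If the coroutine-scheduling suspicion is right, one thing worth trying (an assumption, not a verified fix) is giving each simulated client its own single-threaded dispatcher, so a client loop that blocks or busy-waits cannot starve its siblings on the shared default pool:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Hypothetical mitigation for the "only one client sends requests" symptom:
// isolate each client on its own dispatcher instead of Dispatchers.Default.
fun main() = runBlocking {
    val jobs = listOf("cli1", "cli2", "cli3").map { name ->
        val dispatcher = Executors.newSingleThreadExecutor().asCoroutineDispatcher()
        launch(dispatcher) {
            try {
                // runSimulatedClient(name, ...) would go here in the real test.
                repeat(5) {
                    delay(100)
                    println("$name tick $it")
                }
            } finally {
                dispatcher.close() // release the backing thread
            }
        }
    }
    jobs.joinAll()
}
```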