
Corebench Roadmap #1

@deckarep

Description


Corebench Roadmap 2018

Here's the roadmap for this tool. If anyone wants to take a crack at one of these items, please do the following:

Contributing

  1. Create a new issue with the title: Feature: xyz
  2. Ping @deckarep in the ticket with your intent to contribute
  3. Ask clarifying questions as needed around design approach
  4. Work on the ticket and add your name to the AUTHORS.md file
  5. Ensure build passes and ping @deckarep when finished for a merge

Immediate Needs

  • Implement a nice logger package and make it pretty with colors 🤗
  • Add the ability to choose instance sizes based on CPU count
  • Integrate the benchstat command via a --stat flag, for doing consecutive runs to eliminate noise and compare deltas
  • Clean up and speed up the polling logic to reduce the time from provisioning to benchmarking
  • Allow configuration via either ENV vars or flags, to make the CLI easier to use
  • Ability to go get all dependencies when they aren't vendored in /vendor
  • Ability to specify Go versions as command-line flag
  • Cleanup: go vet, linting, etc
  • Implement a terminal spinner so users know work is happening while waiting
  • Support regex for benchmarks to specify certain ones to run
  • Ability to tee benchmark data to a local file for later examination
  • Ability to capture: cpuprofile, blockprofile, memprofile, etc to local file for later examination
  • Ability to capture binary for usage with pprof tool
  • On panic, ensure proper termination of all cloud resources
  • Add at least another provider: Google Cloud (since they charge by the minute)
  • Add a --dryrun flag, which shows what corebench would do
  • Stabilize the API (at least add one more provider)
  • Unit tests
  • Better SSH key handling and installation (so you can log in and inspect box)
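Several of the items above (regex selection, consecutive runs for benchstat, profile capture) map directly onto flags that `go test` already exposes. A minimal sketch of building the remote `go test` invocation from those options; the struct and field names here are illustrative, not corebench's actual API:

```go
package main

import (
	"fmt"
	"strconv"
)

// BenchOptions mirrors the flags discussed above (names are hypothetical).
type BenchOptions struct {
	BenchRegex string // regex selecting which benchmarks to run
	Count      int    // consecutive runs, feeding benchstat comparisons
	CPUProfile string // write a CPU profile to this file, if set
	MemProfile string // write a memory profile to this file, if set
}

// Args builds the `go test` argument list the provisioned box would run.
func (o BenchOptions) Args() []string {
	// -run ^$ matches no unit tests, so only benchmarks execute.
	args := []string{"test", "-run", "^$", "-benchmem"}
	if o.BenchRegex == "" {
		o.BenchRegex = "." // default: run every benchmark
	}
	args = append(args, "-bench", o.BenchRegex)
	if o.Count > 1 {
		args = append(args, "-count", strconv.Itoa(o.Count))
	}
	if o.CPUProfile != "" {
		args = append(args, "-cpuprofile", o.CPUProfile)
	}
	if o.MemProfile != "" {
		args = append(args, "-memprofile", o.MemProfile)
	}
	return args
}

func main() {
	opts := BenchOptions{BenchRegex: "BenchmarkSet.*", Count: 5, CPUProfile: "cpu.out"}
	fmt.Println(opts.Args())
}
```

The resulting profile files could then be pulled back over SSH for local inspection with pprof, per the capture items above.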

Future Needs

  • Ability to generate comprehensive graph reports from benchmark data
  • Support a default .corebench configuration file for region, droplet size, and other basic settings
  • Add more providers: AWS, ???
  • Ability to benchmark non-open-source code via rsync
  • Ability to launch your own profiling container with your own OS, tooling, etc.

Way, way future

  • Ability to kick off parallel benchmarks...what's the use-case here?
  • Integration with Slack upon completion, or Email report when finished (fire and forget)
