Add status tracking with loading spinners, clean up logging #69
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: BenTheElder. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This gives shiny output like:
$ kind create cluster --name=asdf
Creating cluster 'kind-asdf' ...
✓ Ensuring node image (kindest/node:v1.11.3@sha256:855562266d5f647b5770a91c76a0579ed6487eb5bf1dfe908705dad285525483) 🖼
✓ [kind-asdf-control-plane] Creating node container 📦
✓ [kind-asdf-control-plane] Fixing mounts 🗻
✓ [kind-asdf-control-plane] Starting systemd 🖥
✓ [kind-asdf-control-plane] Waiting for docker to be ready 🐋
✓ [kind-asdf-control-plane] Starting Kubernetes (this may take a minute) ☸
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="/Users/bentheelder/.kube/kind-config-asdf"
kubectl cluster-info
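
As a rough illustration of how such a status line can be produced, here is a minimal, self-contained spinner sketch in Go. The statusSpinner type and its methods are invented for this example and are not kind's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// statusSpinner animates a spinner next to a message while a step runs,
// then rewrites the line with a check mark once the step succeeds.
// It is a hypothetical sketch, not kind's real status code.
type statusSpinner struct {
	frames  []string
	stop    chan struct{}
	stopped chan struct{}
}

func newStatusSpinner() *statusSpinner {
	return &statusSpinner{
		frames:  []string{"|", "/", "-", "\\"},
		stop:    make(chan struct{}),
		stopped: make(chan struct{}),
	}
}

// Start begins animating the spinner for msg in a background goroutine.
func (s *statusSpinner) Start(msg string) {
	go func() {
		defer close(s.stopped)
		for i := 0; ; i++ {
			select {
			case <-s.stop:
				return
			default:
				// \r returns to the start of the line so each frame
				// overwrites the previous one
				fmt.Printf("\r %s %s", s.frames[i%len(s.frames)], msg)
				time.Sleep(100 * time.Millisecond)
			}
		}
	}()
}

// Success stops the animation and replaces the spinner with a check mark.
func (s *statusSpinner) Success(msg string) {
	close(s.stop)
	<-s.stopped // wait for the last frame to finish printing
	fmt.Printf("\r ✓ %s\n", msg)
}

func main() {
	s := newStatusSpinner()
	s.Start("Starting systemd 🖥")
	time.Sleep(2 * time.Second) // stand-in for the real work
	s.Success("Starting systemd 🖥")
}
```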
trying to record this with asciinema is amusing https://asciinema.org/a/kNsHMXcXpRAj4z0lUqM8l4CRV
Force-pushed from 3bbcaaa to a230026.
nice 👍
This is awesome 😄
My main comment on this is making sure it's still possible/easy to debug when failures do happen (or if I just like to see lots of log output 😄).
@@ -209,7 +238,7 @@ func tryUntil(until time.Time, try func() bool) bool {
 // LoadImages loads image tarballs stored on the node into docker on the node
 func (nh *nodeHandle) LoadImages() {
 	// load images cached on the node into docker
-	if err := nh.Run(
+	if err := nh.RunQ(
It'd be awesome if we could make --loglevel=debug also print all of these messages, for debugging purposes. From what I can tell, with this patch there will no longer be any way to debug failures.
Also, if a particular step fails, can we make it print out the stdout/stderr of that failed command?
I was thinking maybe --loglevel=trace should do this after we upgrade logrus, but debug actually sounds good.
We should definitely print out failed command output, especially for kubeadm.
I think what we really want is:
- when critical things fail, inform the user and tell them what they can do about it
- if the log level is cranked high enough, log even "silent" command output on exit

Lots of log output will still mostly work with this; you can set the log level high enough to see it.
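
A rough sketch of those two behaviors: the runQuiet and step helpers below are hypothetical names, not the PR's actual code, and only illustrate surfacing captured output on failure plus echoing it at debug verbosity:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"

	log "github.com/sirupsen/logrus"
)

// runQuiet runs a command and returns its combined stdout/stderr instead
// of streaming it, keeping the status line clean.
func runQuiet(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

// step runs one cluster-creation step: on failure it tells the user what
// failed and dumps the captured output; at debug level it always logs the
// otherwise "silent" output.
func step(description, name string, args ...string) error {
	out, err := runQuiet(name, args...)
	if err != nil {
		log.Errorf("%s failed: %v", description, err)
		fmt.Fprint(os.Stderr, out)
		return err
	}
	log.Debugf("%s output:\n%s", description, out)
	return nil
}

func main() {
	log.SetLevel(log.DebugLevel) // e.g. set via a --loglevel flag
	if err := step("Waiting for docker to be ready", "docker", "info"); err != nil {
		os.Exit(1)
	}
}
```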
/lgtm
Adds a --loglevel flag for setting the logrus log level.
TODO (followup):
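
Regarding the --loglevel flag above, here is a minimal sketch of how such a flag could be wired to logrus; the cobra command plumbing shown is an assumption for illustration and may differ from kind's actual flag handling:

```go
package main

import (
	log "github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)

func main() {
	var loglevel string

	cmd := &cobra.Command{
		Use: "kind",
		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
			// parse the user-supplied level ("debug", "info", ...) and
			// apply it globally before any subcommand runs
			level, err := log.ParseLevel(loglevel)
			if err != nil {
				return err
			}
			log.SetLevel(level)
			return nil
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			log.Debug("only visible with --loglevel=debug or higher")
			log.Info("hello from kind")
			return nil
		},
	}
	cmd.PersistentFlags().StringVar(&loglevel, "loglevel", "warn",
		"logrus log level (panic, fatal, error, warn, info, debug)")

	if err := cmd.Execute(); err != nil {
		log.Fatal(err)
	}
}
```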