[GH-261] Handle revoked token #308
Conversation
Hello @MatthewDorner, Thanks for your pull request! A Core Committer will review your pull request soon. For code contributions, you can learn more about the review process here.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #308      +/-   ##
==========================================
- Coverage   32.20%   31.03%   -1.18%
==========================================
  Files          21       21
  Lines        3465     3757     +292
==========================================
+ Hits         1116     1166      +50
- Misses       2242     2471     +229
- Partials      107      120      +13
```

☔ View full report in Codecov by Sentry.
This PR has been automatically labelled "stale" because it hasn't had recent activity.
Awesome work on this @MatthewDorner, thanks for implementing this solution 👍

The PR mostly looks good. I have one request: make a common function `handleGitlabError` to avoid some code duplication.
server/api.go (Outdated)

```go
if strings.Contains(err.Error(), invalidTokenError) {
	p.handleRevokedToken(c.GitlabInfo)
}
```
Instead of duplicating this block across the different functions, I'm thinking we should have a function `handleGitlabError(err error)` which has the above logic. What do you think @MatthewDorner?
Yes, I think that would be better.
I guess the function would need to take a `*gitlab.UserInfo` as well. Maybe the `handleGitlabError` should be combined with my `handleRevokedToken` as a single function such as `checkAndHandleRevokedToken(err error, info *gitlab.UserInfo)`?
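For concreteness, a minimal sketch of what such a combined helper might look like, assuming the plugin's existing `invalidTokenError` constant, `handleRevokedToken` method, and `gitlab.UserInfo` type; the nil-guards and exact shape are illustrative, not the final code:

```go
// checkAndHandleRevokedToken inspects an error returned by a GitLab API
// call. If the error indicates a revoked token, it disconnects the user's
// account and DMs them instructions to reconnect.
func (p *Plugin) checkAndHandleRevokedToken(err error, info *gitlab.UserInfo) {
	if err == nil || info == nil {
		return
	}
	// invalidTokenError is the plugin's existing constant matching GitLab's
	// `401 {error: invalid_token}` response.
	if strings.Contains(err.Error(), invalidTokenError) {
		p.handleRevokedToken(info)
	}
}
```

Each call site would then reduce to a single `p.checkAndHandleRevokedToken(err, info)` after a failed client call.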
server/command.go (Outdated)

```go
	return "Requested resource is private."
case strings.Contains(err.Error(), invalidTokenError):
	p.handleRevokedToken(info)
```
The code cleanup is good here, though I think for the purpose of this PR, can we just add a call to `handleGitlabError(err)` at the top of the `if err != nil` block?
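Roughly, the suggested placement (the surrounding handler body here is illustrative):

```go
if err != nil {
	// Shared revoked-token handling first; the handler's existing
	// error response below continues unchanged.
	p.handleGitlabError(err)
	// ...existing error handling...
}
```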
Yeah I'll do that.
What is the side effect of "not handling the case where the token is both expired and revoked"? Note that the plugin's API is called multiple times when the frontend requests data to fill the left-hand side GitLab metrics, so it will always be tried more than once during this time. You can refresh your browser to trigger this data refresh.

There may be the case where we end up sending more than one bot DM here, due to more than one request trying to use the access token before it is removed from the kv store. If the server is running in a high availability environment, there may be some lag on the database read for rejecting the token, which could cause this scenario. I'm thinking that sending multiple DMs is an ok outcome, as it's relatively harmless, and any defense around this is going to be complicated and potentially error-prone.

Also, we will want to make sure the changes in this PR work with the changes from #298.
What you describe as the current behavior of this PR seems good to me 👍
I would test the …

Each API call will get to …
Yeah, I waited for that PR to be merged and this PR is based on code that includes those changes. But the issue with the "revoked and expired" case is how to make this work with the … There is probably a workaround I'm just not seeing?
@MatthewDorner Is there any risk with this situation? If it's revoked & expired, and this happens, is the kvstore then in an invalid state? It seems like a non-issue to me if we invalidate the token on our side so we don't try to use it again.
"This case is not handled" There's something I'm missing here. I don't understand how the case is not handled. I understand that the token won't work in this case. Are we getting the "invalid token" error from GitLab in this case? Shouldn't we just delete it from our database and tell the user about it like usual? Is there something stopping us from doing this? Can we extract some code from the function causing the infinite loop to call in another context? Also, please mention me in replies here so I get a notification for the mention 👍 |
@mickmister In …
From my perspective, I feel like the issue is that … I can think about it more, just wanted to see if something obvious occurred.
@mickmister We never get to the actual API call, as the call to `src.Token` fails first.
@MatthewDorner Yeah it would seem to me that …
@mickmister yes, that is one of the solutions that occurred to me. Basically, check for both conditions (expired and revoked) in the same place. I didn't want to go that route immediately since it would be a larger scope of changes than I did so far, so I will work on it a bit and update. Thanks for the input.
@mickmister Think it would work to use a wrapper or decorator around the GitLab client calls? Otherwise, every method that makes GitLab calls must call `checkAndHandleRevokedToken` explicitly. A wrapper function should be cleaner and use less code, but I assume it would hit the KV store for every GitLab call rather than only once per method that makes GitLab calls, but that may be negligible. And may be difficult in other ways I don't foresee.
@MatthewDorner Could we have a wrapper function around the http handler functions? (see mattermost-plugin-gitlab/server/api.go, line 52 at c5d107e) We can rename …
I'm not sure how this wrapper function would work, since each of the GitLab client's methods returns different types. To me, … If the solution in my previous comment won't work, I'm thinking we should just go with the strategy described in your first paragraph. Explicit error handling for each call to the external GitLab client library makes sense to me.
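One way a wrapper could sidestep the differing return types is to accept a closure and standardize only the error path. A sketch under assumptions: `withGitlabClient` and `getProject` are hypothetical names, `internGitlab` stands in for the `xanzy/go-gitlab` package, and only `checkAndHandleRevokedToken` comes from the thread above:

```go
// withGitlabClient runs fn and routes any resulting error through the shared
// revoked-token handling. Callers capture their typed results inside the
// closure, so the wrapper never needs to know the return type.
func (p *Plugin) withGitlabClient(info *gitlab.UserInfo, fn func() error) error {
	err := fn()
	p.checkAndHandleRevokedToken(err, info)
	return err
}

// Example call site: `project` is captured by the closure, so the typed
// result stays outside the wrapper.
func (p *Plugin) getProject(info *gitlab.UserInfo, client *internGitlab.Client, pid string) (*internGitlab.Project, error) {
	var project *internGitlab.Project
	err := p.withGitlabClient(info, func() error {
		var err error
		project, _, err = client.Projects.GetProject(pid, nil)
		return err
	})
	return project, err
}
```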
@mickmister unless you have a strong preference, I think I like locating the GitLab token error handling closer to the GitLab client calls, and to ensure it will be called in more circumstances such as slash commands, even if that isn't strictly necessary due to the frontend constantly making those requests. Also, I already started working on that version haha.
@mickmister yes, this will be the case when the revoked token is detected due to the 4 API requests from the frontend. It also spams the error log a bit with messages such as …
… handle invalid token during token refresh
@mickmister Converting to draft because still having issues. The issue is related to the conversation #298 (comment) and the case where the 4 API requests happen simultaneously, as well as the previous conversation in this PR about the token being both expired and revoked. While it was OK to have multiple token refreshes in the PR linked above, it is not OK once we are checking for and disconnecting the user based on an invalid token.

What happens is one of the 4 requests will refresh the token, and the old token is now invalid. Then the 3 other goroutines try to refresh their (now invalid) tokens and (with the changes in this PR) the user gets disconnected. I'm not sure how the exact timing or concurrency works out, but this condition happens every time for me when I set my token to expire and then refresh the browser. This is why it's a problem if …

I suspect this is also the reason for a user in the community server complaining of their error logs being spammed even after the token refresh fix, which resulted in issue #313. The first request would refresh the token and the other 3 would fail and write to the error log, every 2 hours. It just wouldn't break any functionality, but if we also disconnect the user every time we detect an invalid token it becomes a bigger issue, and maybe some concurrency control is required now.
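If concurrency control does become necessary, one lightweight option is to collapse simultaneous refreshes for the same user into a single call with `golang.org/x/sync/singleflight`. This is a sketch, not code from this PR: `refreshTokenOnce` is a hypothetical name, and the `info.UserID` field and the `checkAndRefreshToken` return type shown are assumptions.

```go
import (
	"golang.org/x/oauth2"
	"golang.org/x/sync/singleflight"
)

// One Group per plugin instance; Do collapses concurrent calls sharing a key.
var refreshGroup singleflight.Group

// refreshTokenOnce ensures that when the frontend's 4 requests race, only
// one goroutine actually redeems the (single-use) refresh token; the others
// block on Do and share its result instead of failing on the consumed token.
func (p *Plugin) refreshTokenOnce(info *gitlab.UserInfo) (*oauth2.Token, error) {
	v, err, _ := refreshGroup.Do(info.UserID, func() (interface{}, error) {
		return p.checkAndRefreshToken(info) // assumed here to return (*oauth2.Token, error)
	})
	if err != nil {
		return nil, err
	}
	return v.(*oauth2.Token), nil
}
```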
The issue is made more difficult by the fact that GitLab's OAuth implementation does not allow a … (see https://docs.gitlab.com/ee/api/oauth2.html). The Golang … Relevant thread here, although it's confusing because they're talking about a few different issues / cases at the same time: …
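For context, the `golang.org/x/oauth2` flow under discussion looks roughly like this: the `TokenSource` refreshes transparently inside `Token()`, which is where a single-use refresh token would get consumed. The function and its inputs here are hypothetical, and the single-use refresh-token behavior is the one the comment above alludes to:

```go
import (
	"context"

	"golang.org/x/oauth2"
)

// refreshFlow sketches where the refresh happens. gitlabURL, clientID,
// clientSecret, and storedToken are hypothetical inputs, not plugin code.
func refreshFlow(gitlabURL, clientID, clientSecret string, storedToken *oauth2.Token) (*oauth2.Token, error) {
	conf := &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		Endpoint: oauth2.Endpoint{
			AuthURL:  gitlabURL + "/oauth/authorize",
			TokenURL: gitlabURL + "/oauth/token",
		},
	}
	src := conf.TokenSource(context.Background(), storedToken)

	// If storedToken is expired, Token() redeems the refresh token against
	// GitLab inside this call. With single-use refresh tokens, a second
	// goroutine holding the same storedToken fails right here, and x/oauth2
	// exposes no hook to observe or persist the rotated token at this point.
	return src.Token()
}
```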
Tested and passed as per testing above
- Spamming the logs when trying to fetch data from the API stops on upgrade as the plugin does not treat the affected users as connected
- Affected users get the DM notice to reconnect
- Users with a functional authentication continue to use the plugin normally
- After 2 days of monitoring I see nothing unusual in the logs after the user has re-connected
- Regression tested Chimera proxy
- Regression tested authentication for browser and desktop
Created a separate issue to triage and address the multiple DMs: #371
LGTM!
Huge thanks @MatthewDorner for the continued efforts on this work.
@srkgupta Do you mind giving this another review? I resolved some merge conflicts regarding the …

@DHaussermann Are you able to give this another smoke test?
Tested and passed
Sanity checked the functionality now that @mickmister has resolved merge conflicts.
- Requests for GitLab data stop when the token is known to be invalid
- DMs are received
- Reconnect works and the user remains connected
LGTM!
Looks like we need @hanzei's approval before this can be merged.
@hanzei Can you take another look at this when you have the chance? The merge is currently blocked by the requested changes.
Summary
If a GitLab API call fails with `401 {error: invalid_token}`, disconnect the user and DM the user a message telling them how to reconnect their account.

Ticket Link
Fixes #261

Questions:
1. I have not handled the case where the token is both expired and revoked. Here, `p.checkAndRefreshToken` attempts to refresh the token and the call to `src.Token` fails. However, if I handle the error and call `p.disconnectGitlabAccount`, it will call `p.checkAndRefreshToken` again, creating an infinite loop. How would you suggest resolving this?
2. In the case where the error occurs during the execution of a slash command, I have not changed the existing slash command error responses; I just send the DM in addition. So if you get the error during `/gitlab me`, you'll get the new message in the DM channel, but the command response will still be "Encountered an error getting your GitLab profile." Should I change the command response to a similar or identical message to what I'm sending in the DM?
3. What test coverage is needed?