
[MM-26385] Include cache and rate limiter to GH client #155

Merged
ethervoid merged 14 commits into master from MM-26385 on Jul 8, 2020
Conversation

@ethervoid
Contributor

Summary

The GitHub client is now created with two layers on top of it:

  • The first layer is a private LRU cache, using httpcache, which checks
    for stored requests/responses and serves them when possible
  • The second layer is a rate limiter, using the rate package, which,
    with the default configuration, allows 10 requests per second

Ticket Link

https://mattermost.atlassian.net/browse/MM-26385

@ethervoid ethervoid added the Work In Progress Not yet ready for review label Jul 6, 2020
github.com/gorilla/mux v1.7.4
github.com/heroku/docker-registry-client v0.0.0-20190909225348-afc9e1acc3d5
github.com/jarcoal/httpmock v1.0.5
github.com/m4ns0ur/httpcache v0.0.0-20200426190423-1040e2e8823f
Contributor Author


We need this fork/fix until it is merged into master
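One way to pin such a fork while waiting for an upstream merge is a go.mod replace directive. This is a sketch only: the PR instead requires the fork module directly, and the upstream path shown here (gregjones/httpcache) is an assumption about which repository was forked.

```
replace github.com/gregjones/httpcache => github.com/m4ns0ur/httpcache v0.0.0-20200426190423-1040e2e8823f
```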

@ethervoid ethervoid added 2: Dev Review Requires review by a core committer and removed Work In Progress Not yet ready for review labels Jul 7, 2020
@ethervoid ethervoid changed the title Include cache and rate limiter to GH client [MM-26385] Include cache and rate limiter to GH client Jul 7, 2020
Contributor

@agnivade agnivade left a comment


Amazing piece of work! Just some small comments to take care of.

Contributor

@metanerd metanerd left a comment


Smart testing and awesome work! Thank you so much @ethervoid ! This will resolve a ton of issues we are having currently! I added some minor suggestions/nits which you can also just ignore/dismiss.

body := &github.Reference{Object: &github.GitObject{}}
err := json.Unmarshal([]byte(testGetRefReturn), &body)
require.NoError(t, err)
resp, err := httpmock.NewJsonResponse(200, body)
Contributor


My IDE says resp could be nil.

Contributor


Since there are 2 responses needed in this test series, IMO naming them resp1 and resp2 (or putting the kind of response in the name) would improve readability by making the scheme more obvious. I know Go favors short variable names, but personally I find it more readable that way, instead of having to read the comment. This is just a suggestion/nit; I leave it up to you.

Contributor


Can these tests be run in parallel?

Contributor Author

@ethervoid ethervoid Jul 8, 2020


In this case, resp cannot be nil because we're providing the elements ourselves, so there's no need to check for nil here.

Changing the literal 200 to http.StatusOK is a great suggestion and I've included it. Thank you!

Can these tests be run in parallel?

It could be done, but given how little time these tests take, we don't need to execute them in parallel for now

body := &github.Reference{Object: &github.GitObject{}}
err := json.Unmarshal([]byte(testGetRefReturn), &body)
require.NoError(t, err)
resp, err := httpmock.NewJsonResponse(200, body)
Contributor


Is there a convention? http.StatusOK or 200? So that I know for my future code.

@ethervoid ethervoid requested review from agnivade and metanerd July 8, 2020 12:28
Contributor

@metanerd metanerd left a comment


Perfect! ❤️

Contributor

@agnivade agnivade left a comment


Nice. So demo next week? ;)

@ethervoid
Contributor Author

@agnivade hahaha yeah sure :D

Member

@isacikgoz isacikgoz left a comment


LGTM 👍

@isacikgoz isacikgoz added 3: Reviews Complete All reviewers have approved the pull request and removed 2: Dev Review Requires review by a core committer labels Jul 8, 2020
@ethervoid ethervoid merged commit 376a7de into master Jul 8, 2020
@ethervoid ethervoid deleted the MM-26385 branch July 8, 2020 14:58
@agnivade
Contributor

agnivade commented Jul 8, 2020

@ethervoid - Something caught my eye. Any reason the default is set to 10? From what I'm seeing here https://github.com/google/go-github#rate-limiting, it should be around 1.3. Perhaps we then need to split out the rate from the burst and have 2 separate flags? The name TokenReserve doesn't immediately convey that it means both.

@metanerd - Just a heads up that the GitHubTokenReserve field now denotes both the rate limit and the token burst, so the config needs to be updated accordingly. Perhaps hold off deploying until we have the previous discussion sorted out.

@ethervoid
Contributor Author

@agnivade Yes, but it is being used to provide both the limit and the burst tokens at the same time. With 10 you get 10 requests/second and 10 burst tokens. We shouldn't limit ourselves to the 5000-request quota, because our rate consumption is not linear, so we would be dropping requests or making them wait for nothing. Also, taking into account that we have the cache as the first layer, we shouldn't be reaching the limit.

In summary, I'd rather have a broader limit and see what happens than set a very tight one

@agnivade
Contributor

agnivade commented Jul 8, 2020

Ah, that makes sense. So the current prod setting is 50. Do you suggest it be 10 then?

@ethervoid
Contributor Author

@agnivade yes. 10 should be more than enough to work perfectly fine without creating a bottleneck

@agnivade
Contributor

agnivade commented Jul 8, 2020

@metanerd
Contributor

metanerd commented Jul 8, 2020

@agnivade done

