nsqlookupd: fix write lock starvation #1208
base: master
Conversation
nsqlookupd/registration_db.go (Outdated)
    return val.(Producers)
}

r.cachedMutex.RUnlock()
https://github.com/patrickmn/go-cache#go-cache says "the cache can be safely used by multiple goroutines" so I don't think you need this lock at all
This is mainly used to avoid multiple set operations happening at the same time.
I see, but you still don't need the read lock for that, only the write lock (which could be a plain mutex) for the following section, where you check again if the result is in the cache, before running the expensive loop.
That expensive loop, which is the main thing you are trying to avoid, is:
for k, producers := range r.registrationMap {
    if !k.IsMatch(category, key, subkey) {
        continue
    }
    for _, producer := range producers {
I think that there is probably a better way to structure the registration map, or maintain pre-computed results along side it or embedded in it, which are updated or invalidated whenever the registration map is updated, to get better performance with huge numbers of producers / topics / channels / whatever.
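A minimal sketch of that suggestion, assuming a go-cache instance in a field named r.cache, a plain sync.Mutex named r.cacheFillMutex, and a findProducersUncached helper wrapping the loop above (all three names are hypothetical): reads hit the goroutine-safe cache with no extra lock, and only cache misses serialize on the mutex while re-checking and rebuilding the entry.

func (r *RegistrationDB) FindProducersCached(category string, key string, subkey string) Producers {
    cacheKey := category + ":" + key + ":" + subkey
    // go-cache is safe for concurrent use, so no lock is needed on the fast path
    if val, found := r.cache.Get(cacheKey); found {
        return val.(Producers)
    }
    r.cacheFillMutex.Lock()
    defer r.cacheFillMutex.Unlock()
    // check again: another goroutine may have filled this entry while we waited
    if val, found := r.cache.Get(cacheKey); found {
        return val.(Producers)
    }
    producers := r.findProducersUncached(category, key, subkey) // the expensive loop
    r.cache.Set(cacheKey, producers, cache.DefaultExpiration)   // DefaultExpiration from patrickmn/go-cache
    return producers
}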
According to the sync.Map doc:
(1) when the entry for a given key is only ever written once but read many times, as in caches that only grow, or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys.
Item (1) says that sync.Map suits a situation where each key is only written once, which is definitely not the case for this nsqlookupd registration map usage.
but you still don't need the read lock for that, only the write lock (which could be a plain mutex) for the following section, where you check again if the result is in the cache, before running the expensive loop.
@ploxiln Thanks, I understand. Will do.
Despite being less than optimal for this use-case, a sync.Map can have entries inserted and updated concurrently while a query is iterating over the entire structure. So it might resolve the initial complaint about the write lock being starved.
I consider the addition of this cache to be a hacky work-around. It adds up to a 1 minute delay to updates, for all users. It may be OK for many consumers, but probably not all, and not for nsqd or nsqadmin.
nsqlookupd is already just a big in-memory data structure. It could probably be re-structured to process your extreme case faster. That would admittedly require more time and effort to understand the current structure and queries. So a cheap quick fix, without those problems, might be a sync.Map.
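To make that concurrency claim concrete, here is a small self-contained toy (not PR code) showing that sync.Map.Range can proceed while another goroutine keeps calling Store, with no explicit lock and no data race:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map
    for i := 0; i < 1000; i++ {
        m.Store(i, i)
    }

    done := make(chan struct{})
    go func() {
        // concurrent writer: keeps overwriting entries while Range is in progress
        for i := 0; i < 1000; i++ {
            m.Store(i, i*2)
        }
        close(done)
    }()

    count := 0
    m.Range(func(k, v interface{}) bool {
        count++ // Range may or may not reflect concurrent writes, but it never blocks them
        return true
    })
    <-done
    fmt.Println("iterated", count, "entries with no explicit locking")
}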
force-pushed from 7aa646f to 0ab3fe3
force-pushed from 0ab3fe3 to 1a3e5a8
nsqlookupd/registration_db.go (Outdated)
removed := false
if _, exists := producers[id]; exists {
    removed = true
}

// Note: this leaves keys in the DB even if they have empty lists
delete(producers, id)

r.registrationMap.Store(k, producers)
I don't think you need to store the producers map again after modifying it, because map is a reference type. I think this is true for AddProducer() too (registrationMap.LoadOrStore() ensured that the producers map returned for you to operate on is the one already in the registrationMap).
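Roughly what that means in code (a sketch using the PR's types, not an exact excerpt): the value LoadOrStore returns is the very map the sync.Map holds, so mutating it in place is enough.

val, _ := r.registrationMap.LoadOrStore(k, make(ProducerMap))
producers := val.(ProducerMap)   // same map object the sync.Map stores
producers[p.peerInfo.id] = p     // visible via any later Load(k); no re-Store needed
// (though, as noted further down the thread, this in-place mutation is still not
// safe against concurrent readers of the same ProducerMap)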
commit title typo: "startvation" -> "starvation"
I thought of another problem, unfortunately. The values, which are plain maps, are not safe for concurrent updates and reads. So while concurrently updating and iterating over the registrationMap itself is now fine, each ProducerMap value still needs protection.
One option is to take the read lock just around each ProducerMap; another idea is to make every ProducerMap a sync.Map as well. Both of these ideas sound like they will add a significant amount of overhead, more than just one top-level lock.
Looking at the wildcard queries, there is this helper:
func (r *RegistrationDB) needFilter(key string, subkey string) bool {
    return key == "*" || subkey == "*"
}
and the HTTP handlers that hit it:
channels := s.ctx.nsqlookupd.DB.FindRegistrations("channel", topicName, "*").SubKeys()
topics := s.ctx.nsqlookupd.DB.FindRegistrations("topic", "*", "").Keys()
So if we could come up with an efficient way to store the information needed to serve these queries, and keep it consistent as registrations are added and removed, we could avoid ever needing to do the expensive looping ...
Accelerating each one of these queries is sort of like adding another index. It is more complicated to keep all these indexes consistent as registrations are added/removed, but it may be worth it so that all these queries are fast at your scale, and still accurate.
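As a very rough sketch of that "extra index" idea (not part of this PR; channelsByTopic, addChannelIndex, and channelsForTopic are made-up names), a topic-to-channels set could be maintained under the existing write lock whenever a channel registration is added, so the "*" sub-key query above becomes a map lookup instead of a full scan:

// hypothetical extra field on RegistrationDB, updated by AddRegistration/RemoveRegistration:
//   channelsByTopic map[string]map[string]struct{} // topic -> set of channel names

func (r *RegistrationDB) addChannelIndex(k Registration) {
    if k.Category != "channel" {
        return
    }
    set, ok := r.channelsByTopic[k.Key]
    if !ok {
        set = make(map[string]struct{})
        r.channelsByTopic[k.Key] = set
    }
    set[k.SubKey] = struct{}{}
}

// FindRegistrations("channel", topicName, "*").SubKeys() could then be served from the index:
func (r *RegistrationDB) channelsForTopic(topic string) []string {
    r.RLock()
    defer r.RUnlock()
    channels := make([]string, 0, len(r.channelsByTopic[topic]))
    for c := range r.channelsByTopic[topic] {
        channels = append(channels, c)
    }
    return channels
}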
That's quite true. I have encountered this while debugging this PR code. That's why I added a lock before iterating over the ProducerMap.
I use ...
force-pushed from 1a3e5a8 to b3847f1
r.registrationMap.Range(func(k, v interface{}) bool {
    if k.(Registration).IsMatch(category, key, subkey) {
        producers := v.(ProducerMap)
        for _, producer := range producers {
I think you need to r.RLock() around this
Done.
If you take the r.RLock() around the whole loop, that puts this back close to where it started. If you can take the RLock() only after matching a Registration and before iterating over the ProducerMap, and then releasing it after finishing with that ProducerMap, it could avoid starving the write lock.
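A sketch of that finer-grained locking (illustrative only; dedup of producers elided): the read lock is held per matching entry rather than across the whole Range.

r.registrationMap.Range(func(k, v interface{}) bool {
    if !k.(Registration).IsMatch(category, key, subkey) {
        return true // no lock held while skipping non-matching registrations
    }
    // take the read lock only around this one ProducerMap, then release it
    // so writers can interleave between entries
    r.RLock()
    for _, producer := range v.(ProducerMap) {
        results = append(results, producer)
    }
    r.RUnlock()
    return true
})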
Yes. Actually, after finally finishing this PR, I thought about this. The PR now seems similar to the original one, but some write locks have been removed.
If you can take the RLock() only after matching a Registration and before iterating over the ProducerMap, and then releasing it after finishing with that ProducerMap, it could avoid starving the write lock.
Not quite sure about this, since the read lock can be acquired simultaneously, and acquiring and releasing the read lock frequently may use more CPU.
yeah ... that's why I said this strategy is "sounding more iffy"
as I see it, it's either go back to the original, or take the read lock just around accessing each ProducerMap ("thrashing" the lock I guess), or turn all these ProducerMap into sync.Map themselves
results := Registrations{}
-for k, producers := range r.registrationMap {
+r.registrationMap.Range(func(k, v interface{}) bool {
+    producers := v.(ProducerMap)
    if _, exists := producers[id]; exists {
I think you also need the r.RLock() here
Done.
@ploxiln Actually I have tested this PR in our clusters for about 1 day. Everything is fine. The worry about concurrent read/write access for the values of the registrationMap ...
force-pushed from b3847f1 to 071a571
force-pushed from 55de623 to 383fba1
you pushed the wrong branch over this one :)
force-pushed from 383fba1 to 55de623
@ploxiln Done.
Where are we at on this one? Seems like the conversation has stalled at having to take a lock around a large critical section when iterating over the registration map.
First, it doesn't look like we ever actually call ... (see nsq/nsqlookupd/registration_db.go, lines 138 to 155 at 55de623)
Then, for ...
see also #584, where some of that code referenced above was added.
Or if we want to get fancy, something like https://github.com/derekparker/trie is probably more appropriate. We'd have to change how we generate keys and choose a separator (e.g. ...)
This PR fixes registration DB write lock starvation.
We have encountered the following circumstance: ...
And, finally, the registration DB is large.
So the lookup API takes a little more time to respond. While a lookup request holds the read lock of the registration DB, every operation that needs the write lock, for example registering/unregistering channels, is blocked. And since the read lock can be acquired simultaneously by many lookup requests, the write-lock operations can be blocked for a long time, and memory and goroutines accumulate.
This PR adds a cache for the FindProducers response. The TTL is 1 minute, and purging of expired entries runs at a 5-minute interval (see the setup sketch below).
Cons:
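For reference, a minimal sketch of how a cache with those parameters can be created with patrickmn/go-cache (the producersCache name is illustrative, not necessarily the PR's):

import (
    "time"

    cache "github.com/patrickmn/go-cache"
)

// cached FindProducers results expire after 1 minute;
// expired entries are purged every 5 minutes
var producersCache = cache.New(1*time.Minute, 5*time.Minute)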