Check the configuration and table size when reclaiming keys.#19
bf661e2 to 8ab2c19
  @doc false
  def remove_oldest(cache, count) do
-   ms = [{{:_, :_, :"$1", :_}, [], [:"$1"]}]
+   ms = [{{:"$1", :_, :"$2", :_}, [], [{{:"$2", :"$1"}}]}]
Overall the PR looks good. Can you talk a little about the changes to this function? At first blush the two versions look effectively equivalent to me, except that I'd assume the former implementation is (or could be) optimized under the hood when doing that range delete.
Is there some subtlety here I'm missing that makes the updated implementation more correct?
The previous version only selected the timestamp and then deleted everything with a timestamp less than the cutoff. That approach removes more keys than expected when many keys were inserted at the same timestamp. The new match spec pulls back both the key and the timestamp, so we can sort on both and remove an exact number of keys. We could make this even more efficient with an ordered set, but I decided not to go that far: I'd want better measurements of these functions before committing to further optimization, and at the moment I just wanted to resolve the bug.
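The over-deletion can be illustrated in plain Python (this models the behavior, not the library's actual ETS code; the 4-tuple entry shape and function names are illustrative, inferred from the match specs in the diff):

```python
# Hypothetical in-memory model of the cache table: each entry is
# (key, value, timestamp, ttl), mirroring the 4-tuple shape implied
# by the match specs above.

def remove_oldest_by_timestamp(entries, count):
    """Old approach: find the count-th oldest timestamp and delete
    every entry at or below it. Over-deletes when many keys share
    that timestamp."""
    timestamps = sorted(ts for (_k, _v, ts, _ttl) in entries)
    cutoff = timestamps[count - 1]
    return [e for e in entries if e[2] > cutoff]

def remove_oldest_by_timestamp_and_key(entries, count):
    """New approach: sort by (timestamp, key) and drop exactly
    `count` entries."""
    doomed = sorted((ts, k) for (k, _v, ts, _ttl) in entries)[:count]
    doomed_keys = {k for (_ts, k) in doomed}
    return [e for e in entries if e[0] not in doomed_keys]

# Three keys inserted at the same timestamp, one later.
entries = [("a", 1, 100, 60), ("b", 2, 100, 60),
           ("c", 3, 100, 60), ("d", 4, 200, 60)]

# Reclaiming 1 key: the timestamp-only cutoff wipes out all three
# keys inserted at t=100, while the (timestamp, key) version
# removes exactly one.
print(len(remove_oldest_by_timestamp(entries, 1)))          # 1
print(len(remove_oldest_by_timestamp_and_key(entries, 1)))  # 3
```

The same effect is why repeated reclamations under the old scheme could empty the whole table: each pass could sweep an entire timestamp bucket rather than a fixed number of keys.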
This should help resolve an issue where multiple reclamations could remove all of the keys in a cache. I believe @jeffutter pointed this out years ago and it took me a while to get around to thinking about it. This is what burnout does to a person.