Sometimes I need to do this:
unless enough_widgets?
  make_more_widgets
end
Which is all well and good, until I start letting two or more copies of this code run in parallel. If you’ve never thought about this before, what can happen is something nasty called a race condition: two or more processes (or threads) check #enough_widgets? at the same time, and each decides it needs to go and #make_more_widgets. With several processes now making widgets, we end up with too many.
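Here’s a minimal sketch of how that race can play out with plain Ruby threads. The widget counter and the MAX_WIDGETS limit are made up for illustration – they aren’t part of MegaMutex:

MAX_WIDGETS = 10
@widget_count = 9

def enough_widgets?
  @widget_count >= MAX_WIDGETS
end

def make_more_widgets
  sleep 0.1                  # simulate slow work, widening the race window
  @widget_count += 1
end

threads = 2.times.map do
  Thread.new do
    unless enough_widgets?   # both threads can pass this check...
      make_more_widgets      # ...so both make a widget, overshooting the limit
    end
  end
end
threads.each(&:join)
puts @widget_count           # frequently prints 11 -- one widget too many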
The solution is to lock this critical section of code so that only one process can run it at a time – everyone else has to queue up and wait their turn. That way each check of #enough_widgets? returns an accurate answer. Within a single process, you can do this with the Mutex class from Ruby’s standard library, but when you run multiple processes in parallel, across multiple machines, you need something more. You need MegaMutex.
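For comparison, this is roughly what the single-process, thread-only fix looks like, reusing the hypothetical helpers from the sketch above. MegaMutex takes the same idea and stretches it across processes and machines:

require 'thread'   # Mutex lives in the standard library

widget_mutex = Mutex.new

threads = 2.times.map do
  Thread.new do
    widget_mutex.synchronize do   # only one thread at a time gets in here
      make_more_widgets unless enough_widgets?
    end
  end
end
threads.each(&:join)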
How
Suppose you have a WidgetMaker:
class WidgetMaker
  include MegaMutex

  def ensure_just_enough_things
    with_distributed_mutex("WidgetMaker Mutex ID") do
      unless enough_widgets?
        make_more_widgets
      end
    end
  end
end
Now, thanks to the magic of MegaMutex, you can be sure that all processes trying to run this code will wait their turn, so each one will have the chance to make exactly the right number of widgets without anyone else poking their nose in.
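To see it in action, you could spin up a few competing processes with something like this hypothetical driver, assuming #enough_widgets? and #make_more_widgets are implemented on WidgetMaker:

# Three worker processes all race to top up the widget supply,
# but each blocks on the shared mutex until it gets its turn.
pids = 3.times.map do
  fork { WidgetMaker.new.ensure_just_enough_things }
end
pids.each { |pid| Process.wait(pid) }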
Configuration
MegaMutex uses the memcache-client gem to store the mutex, so you’ll need at least one memcached server that all your processes can reach.
By default, MegaMutex will attempt to connect to a memcached instance on the local machine, but you can configure any number of servers like so:
MegaMutex.configure do |config|
  config.memcache_servers = ['mc1', 'mc2']
end
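Since the server list is handed straight to memcache-client, addresses in its usual 'host:port' form should also work when your memcached instances aren’t on the default port. That’s an assumption based on memcache-client’s behaviour rather than anything MegaMutex documents:

MegaMutex.configure do |config|
  # hypothetical hosts and non-default ports, in memcache-client's 'host:port' form
  config.memcache_servers = ['mc1.example.com:11211', 'mc2.example.com:11212']
end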
Installation
sudo gem install mega_mutex
Nice – I did the same thing for cross-server coordination: shared mutexes using memcache 🙂
I’m curious why you ended up using memcached for a lock instead of using a database lock column? Were you using a DB that doesn’t have locking?