Python Concurrency Programming: 6 Python Synchronization Primitives You Should Know

11 Apr 2018 · CPOL · 12 min read
This is an excerpt from Learning Concurrency in Python, by Elliot Forbes. Published by Packt Publishing.

Race conditions are a troublesome and oft-cursed aspect of concurrent programming that plague hundreds, if not thousands, of programs around the world.

The standard definition of a race condition is as follows:

A race condition or race hazard is the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events.

One of the major things we need to guard against when implementing concurrency in our applications is race conditions. These race conditions can cripple our applications and cause bugs that are hard to debug and even harder to fix. In order to prevent these issues, we need to understand both how these race conditions occur and how we can guard against them using the synchronization primitives we'll be covering in this chapter.

Understanding synchronization and the basic primitives that are available to you is vital if you are to create thread-safe, high-performance programs in Python. Thankfully, we have numerous different synchronization primitives available to us in the threading Python module that can help us in a number of different concurrent situations.

In this section, I'll be giving you a brief overview of all of the synchronization primitives available to you as well as a few simple examples of how you can use these within your programs. By the end of it, you should be able to implement your own concurrent Python programs that can access resources in a thread-safe way.

The Join Method

When it comes to developing important enterprise systems, being able to dictate the execution order of some of our tasks is incredibly important. Thankfully, Python's thread objects allow us to retain some form of control over this, as they come with a join method.

The join method, essentially, blocks the calling (parent) thread from progressing any further until the thread it is called on has terminated. This could be either through naturally coming to an end, or through the thread throwing an unhandled exception.

Locks

Locks are an essential mechanism when trying to access shared resources from multiple threads of execution. The best way to picture this is to imagine you have one bathroom and multiple flat mates--when you want to freshen up or take a shower, you would want to lock the door so that nobody else could use the bathroom at the same time.

A lock in Python is a synchronization primitive that allows us to essentially lock our bathroom door. It can be in either a "locked" or "unlocked" state, and we can only acquire a lock while it's in an "unlocked" state.
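These two states can be demonstrated in a few lines using nothing but the standard threading module (this short sketch is our own, not from the book):

```python
import threading

lock = threading.Lock()
print(lock.locked())   # False - the lock starts out "unlocked"

# Using "with" acquires the lock on entry and guarantees it is
# released on exit, even if the body raises an exception.
with lock:
    print(lock.locked())   # True - the lock is now "locked"

print(lock.locked())   # False - released again
```

The with statement is the idiomatic way to take a lock in Python, as it removes any chance of forgetting the matching release call.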

RLocks

Reentrant-locks, or RLocks as they are called, are synchronization primitives that work much like our standard lock primitive, but can be acquired by a thread multiple times if that thread already owns it.

For example, say thread-1 acquires the RLock; each time thread-1 then acquires the lock again, a counter within the RLock primitive is incremented by 1. If thread-2 tried to come along and acquire the RLock, it would have to wait until the counter of the RLock drops to 0 before the lock could be acquired; thread-2 would block until this condition is met.

Why is this useful, though? Well, it can come in handy when you, for instance, want to have thread-safe access for a method within a class that accesses other class methods.

Condition

A condition is a synchronization primitive that waits on a signal from another thread. For example, this could be that another thread has finished execution, and that the current thread can proceed to perform some kind of calculation on the results.
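As an illustrative sketch of this pattern (the worker function and variable names here are our own, not from the book), one thread produces a result and notifies the condition, while the main thread waits for it:

```python
import threading

result = []
cond = threading.Condition()

def worker():
    # Produce a result, then signal any thread waiting on the condition.
    with cond:
        result.append(42)
        cond.notify()

threading.Thread(target=worker).start()

with cond:
    # wait_for checks the predicate before sleeping, so we don't miss
    # a notification that arrived before we started waiting.
    cond.wait_for(lambda: len(result) > 0)
    print(result[0])  # 42
```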

Semaphores

In the first chapter, we touched upon the history of concurrency, and we talked a bit about Dijkstra. Dijkstra was the man who took the idea of semaphores from railway systems and translated it into something we could use within our own complex concurrent systems.

Semaphores have an internal counter that is decremented by each acquire call and incremented by each release call. Upon initialization, this counter defaults to 1 unless otherwise set. The counter can never fall below 0; an acquire call made while the counter is already 0 blocks until another thread calls release.

Say we protected a block of code with a semaphore, and set the semaphore's value to 2. If one thread acquired the semaphore, then the semaphore's value would be decremented to 1. If another thread then tried to acquire the semaphore, the semaphore's value would be decremented to 0. At this point, if yet another thread were to come along, the semaphore would block its acquire request until one of the original two threads called the release method and the counter rose back above 0.
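The counting behavior described above can be sketched as follows; we use the non-blocking form of acquire so the example doesn't hang (the names are our own):

```python
import threading

# A semaphore initialized to 2: at most two holders at once.
sem = threading.Semaphore(2)

sem.acquire()                        # counter: 2 -> 1
sem.acquire()                        # counter: 1 -> 0
# The counter is now 0, so a further blocking acquire would hang;
# the non-blocking form simply reports failure instead.
print(sem.acquire(blocking=False))   # False
sem.release()                        # counter: 0 -> 1
print(sem.acquire(blocking=False))   # True
```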

Bounded Semaphores

Bounded semaphores are almost identical to normal semaphores, with one difference:

A bounded semaphore checks that its counter never exceeds its initial value; if a release call would push it past that value, a ValueError is raised. In most situations, semaphores are used to guard resources with limited capacity, so releasing a semaphore too many times is a sign of a bug. As with a normal semaphore, if no value is given, it defaults to 1.

These bounded semaphores could, typically, be found in web server or database implementations to guard against resource exhaustion in the event of too many people trying to connect at once, or trying to perform a specific action at once.

It's generally better practice to use a bounded semaphore rather than a normal semaphore. If we were to change the code from our earlier Semaphore example to use threading.BoundedSemaphore(4) and run it again, we would see almost exactly the same behavior, except that we would now be guarded against some very simple programmatic errors that would otherwise have remained uncaught.
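As a quick illustration of the extra check a bounded semaphore performs (a sketch of our own, not from the book):

```python
import threading

bounded = threading.BoundedSemaphore(2)
bounded.acquire()
bounded.release()      # fine: counter returns to its initial value of 2
try:
    bounded.release()  # one release too many
except ValueError as err:
    print("Caught the bug: {}".format(err))
```

A plain threading.Semaphore would have silently let the counter climb to 3 here, masking the mismatched release.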

Events

Events are a very simple, but also very useful, form of communication between multiple threads running concurrently. With events, one thread typically signals that an event has occurred while other threads are actively listening for this signal.

Events are, essentially, objects that feature an internal flag that is either true or false. Within our threads, we can continuously poll this event object to check what state it is in, and then choose to act in whatever manner we want when that flag changes state.

In the previous chapter, we talked about how there are no real mechanisms for killing threads natively in Python, and that's still true. However, we can utilize these event objects and have our threads run only as long as our event object remains unset. While this won't help at the point where a SIGKILL signal is sent, it can be useful in situations where you need to shut down gracefully and can wait for a thread to finish what it's doing before it terminates.

An Event has four public functions with which we can modify and utilize it:

  • is_set() (spelled isSet() in older code): This checks whether the event has been set
  • set(): This sets the event
  • clear(): This resets our event object to its unset state
  • wait(): This blocks until the internal flag is set to true
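The graceful-shutdown pattern described above can be sketched like this (the worker and stop_event names are our own, not from the book):

```python
import threading
import time

stop_event = threading.Event()
ticks = []

def worker():
    # Run only for as long as the event remains unset --
    # a graceful-shutdown signal rather than a hard kill.
    while not stop_event.is_set():
        ticks.append(time.time())
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)
stop_event.set()  # signal the worker that it should stop
t.join()          # wait for it to finish its current iteration
print("Worker stopped cleanly after {} ticks".format(len(ticks)))
```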

Barriers

Barriers are a synchronization primitive that was introduced in Python 3 (specifically, in version 3.2), and they address a problem that could previously only be solved with a somewhat complicated mixture of conditions and semaphores.

These barriers are control points that ensure that a group of threads only makes progress once every participating thread has reached the same point.

This might sound a little bit complicated and unnecessary, but it can be incredibly powerful in certain situations, and it can certainly reduce code complexity.
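As a brief sketch of a barrier in action (our own example, not from the book), three threads each reach the barrier, and none proceeds until all have arrived:

```python
import threading

# All three participating threads must call wait() before any proceeds.
barrier = threading.Barrier(3)
order = []

def worker(name):
    order.append("{} before".format(name))
    barrier.wait()  # blocks until all 3 threads have arrived
    order.append("{} after".format(name))

threads = [threading.Thread(target=worker, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "before" entry is guaranteed to precede every "after" entry,
# though the order within each group is nondeterministic.
print(order)
```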

A Closer Look at the Join Method

To recap, the join method, essentially, blocks the calling (parent) thread from progressing any further until the thread it is called on has terminated, either through naturally coming to an end, or through the thread throwing an unhandled exception. Let's understand this through the following example:

import threading
import time


def ourThread(i):
    print("Thread {} Started".format(i))
    time.sleep(i * 2)
    print("Thread {} Finished".format(i))


def main():
    thread1 = threading.Thread(target=ourThread, args=(1,))
    thread1.start()
    print("Is thread 1 Finished?")
    thread2 = threading.Thread(target=ourThread, args=(2,))
    thread2.start()
    thread2.join()
    print("Thread 2 definitely finished")


if __name__ == '__main__':
    main()

Breaking It Down

The preceding code example shows an example of how we can make the flow of our threaded programs somewhat deterministic by utilizing this join method.

We begin by defining a very simple function called ourThread, which takes in one parameter. All this function does is print out when it has started, sleep for whatever value is passed into it times 2, and then print out when it has finished execution.

In our main function, we define two threads, the first of which we aptly call thread1, and pass in a value of 1 as its sole argument. We then start this thread and execute a print statement. What's important to note is that this first print statement executes before the completion of our thread1.

We then create a second thread object and, imaginatively, call this thread2, passing in 2 as our sole argument this time. The key difference, though, is that we call thread2.join() immediately after we start this thread. By joining thread2, we preserve the order in which we execute our print statements, and you can see in the output that Thread 2 definitely finished does indeed get printed after thread2 has terminated.

Putting It Together

While the join method may be very useful and provide a quick and clean way of ensuring order within our code, it's also very important to note that it can, potentially, undo all the gains made by making our code multithreaded in the first place.

Consider our thread2 object in the preceding example--what exactly did we gain by multithreading this? I know that this is a rather simple program, but the point remains that we joined it immediately after we started it, and essentially, blocked our primary thread until such time as thread2 completed its execution. We, essentially, rendered our multithreaded application single threaded during the course of the execution of thread2.

A Closer Look at Locks

To recap, a lock in Python is a synchronization primitive that allows us to essentially lock our bathroom door. It can be in either a "locked" or "unlocked" state, and we can only acquire a lock while it's in an "unlocked" state.

Example

In Chapter 2, Parallelize It, we had a look at the following code sample:

Python
import threading
import time
import random

counter = 1


def workerA():
    global counter
    while counter < 1000:
        counter += 1
        print("Worker A is incrementing counter to {}".format(counter))
        sleepTime = random.randint(0, 1)
        time.sleep(sleepTime)


def workerB():
    global counter
    while counter > -1000:
        counter -= 1
        print("Worker B is decrementing counter to {}".format(counter))
        sleepTime = random.randint(0, 1)
        time.sleep(sleepTime)


def main():
    t0 = time.time()
    thread1 = threading.Thread(target=workerA)
    thread2 = threading.Thread(target=workerB)
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    t1 = time.time()
    print("Execution Time {}".format(t1 - t0))


if __name__ == '__main__':
    main()

In this preceding sample, we saw two threads constantly competing in order to increment or decrement a counter. By adding locks, we can ensure that these threads can access our counter in a deterministic and safe manner.

Python
import threading
import time
import random

counter = 1
lock = threading.Lock()


def workerA():
    global counter
    lock.acquire()
    try:
        while counter < 1000:
            counter += 1
            print("Worker A is incrementing counter to {}".format(counter))
            sleepTime = random.randint(0, 1)
            time.sleep(sleepTime)
    finally:
        lock.release()


def workerB():
    global counter
    lock.acquire()
    try:
        while counter > -1000:
            counter -= 1
            print("Worker B is decrementing counter to {}".format(counter))
            sleepTime = random.randint(0, 1)
            time.sleep(sleepTime)
    finally:
        lock.release()


def main():
    t0 = time.time()
    thread1 = threading.Thread(target=workerA)
    thread2 = threading.Thread(target=workerB)
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    t1 = time.time()
    print("Execution Time {}".format(t1 - t0))


if __name__ == '__main__':
    main()

Breaking It Down

In the preceding code, we've added a very simple lock primitive that encapsulates both of the while loops within our two worker functions. When the threads first start, they both race to acquire the lock so that they can pursue their goal of incrementing the counter to 1,000 or decrementing it to -1,000 without having to compete with the other thread. It is only after one thread accomplishes its goal and releases the lock that the other can acquire that lock and try to increment or decrement the counter in turn.

The preceding code will execute incredibly slowly, as it's mainly meant for demonstration purposes. If you removed the time.sleep() calls within the while loop, then you should notice this code executes almost instantly.

A Closer Look at RLocks

To recap, reentrant-locks, or RLocks as they are called, are synchronization primitives that work much like our standard lock primitive, but can be acquired by a thread multiple times if that thread already owns it.

Example

Python
import threading
import time


class myWorker():
    def __init__(self):
        self.a = 1
        self.b = 2
        self.Rlock = threading.RLock()

    def modifyA(self):
        with self.Rlock:
            print("Modifying A : RLock Acquired: {}".format(
                self.Rlock._is_owned()))
            print("{}".format(self.Rlock))
            self.a = self.a + 1
            time.sleep(5)

    def modifyB(self):
        with self.Rlock:
            print("Modifying B : RLock Acquired: {}".format(
                self.Rlock._is_owned()))
            print("{}".format(self.Rlock))
            self.b = self.b - 1
            time.sleep(5)

    def modifyBoth(self):
        with self.Rlock:
            print("Rlock acquired, modifying A and B")
            print("{}".format(self.Rlock))
            self.modifyA()
            print("{}".format(self.Rlock))
            self.modifyB()
        print("{}".format(self.Rlock))


workerA = myWorker()
workerA.modifyBoth()

Breaking It Down

In the preceding code, we see a prime example of the way an RLock works within a single-threaded program. We have defined a class called myWorker, which features four methods, the first of which is the constructor that initializes our Rlock and our a and b variables.

We then go on to define two methods that modify a and b respectively. These both first acquire the class's Rlock using the with statement, and then perform any necessary modifications to our internal variables.

Finally, we have our modifyBoth function, which performs the initial Rlock acquisition before calling the modifyA and modifyB functions.

At each step of the way, we print out the state of our Rlock. We see that after it has been acquired within the modifyBoth function, its owner is set to the main thread, and its count is incremented to 1. When we next call modifyA, the RLock's counter is again incremented by 1, and the necessary calculations are made before modifyA releases the Rlock. Upon modifyA's release of the Rlock, we see the counter decrement back to 1 before being immediately incremented to 2 again by our modifyB function.

Finally, when modifyB completes its execution, it releases the Rlock, and then so does our modifyBoth function. When we do a final print out of our Rlock object, we see that the owner has been set back to 0, and that our count has also been set to 0. It is only at this point in time that another thread could, in theory, obtain this lock.

Output

The output would look as follows:

$ python3.6 04_rlocks.py
Rlock acquired, modifying A and B
<locked _thread.RLock object owner=140735793988544 count=1 at 0x10296e6f0>
Modifying A : RLock Acquired: True
<locked _thread.RLock object owner=140735793988544 count=2 at 0x10296e6f0>
<locked _thread.RLock object owner=140735793988544 count=1 at 0x10296e6f0>
Modifying B : RLock Acquired: True
<locked _thread.RLock object owner=140735793988544 count=2 at 0x10296e6f0>
<unlocked _thread.RLock object owner=0 count=0 at 0x10296e6f0>

RLocks versus Regular Locks

If we were to try to perform the same preceding program using a traditional lock primitive, then you should notice that the program never actually gets past the start of our modifyA() function. Our program would, essentially, go into a deadlock: a standard lock is not reentrant, so when modifyA tries to acquire the lock that modifyBoth already holds, the thread blocks forever, waiting for a release that can never come. This is shown in the following code example:

Python
import threading
import time


class myWorker():
    def __init__(self):
        self.a = 1
        self.b = 2
        self.lock = threading.Lock()

    def modifyA(self):
        with self.lock:  # deadlocks: the lock is already held by modifyBoth
            print("Modifying A : Lock Acquired: {}".format(
                self.lock.locked()))
            print("{}".format(self.lock))
            self.a = self.a + 1
            time.sleep(5)

    def modifyB(self):
        with self.lock:
            print("Modifying B : Lock Acquired: {}".format(
                self.lock.locked()))
            print("{}".format(self.lock))
            self.b = self.b - 1
            time.sleep(5)

    def modifyBoth(self):
        with self.lock:
            print("lock acquired, modifying A and B")
            print("{}".format(self.lock))
            self.modifyA()
            print("{}".format(self.lock))
            self.modifyB()
        print("{}".format(self.lock))


workerA = myWorker()
workerA.modifyBoth()

RLocks, essentially, allow us to obtain some form of thread safety in a recursive manner without having to implement complex acquire-and-release logic throughout our code. They allow us to write simpler code that is easier to follow and, as a result, easier to maintain once it goes to production.

For a closer look at the other Python synchronization primitives and to learn other techniques and tips to write efficient concurrent programs in Python, check out Packt Publishing’s book, Learning Concurrency in Python by Elliot Forbes.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Packt Publishing
United Kingdom United Kingdom
Founded in 2004 in Birmingham, UK, Packt's mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals.

Working towards that vision, we have published over 5000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done - whether that's specific learning on an emerging technology or optimizing key skills in more established tools.
