
Python multithreading

As mentioned in my introductory post, this blog is meant to be a kind of work diary, and sometimes a shoulder to cry on. When it comes to threads, regardless of the programming language, a programmer needs to be mentally strong.

Jokes aside, I would first like to introduce this post with some links to start learning what threading is in Python:

Why do we need multithreading in Python?

If the Python interpreter executes only one thread at a time, what is the purpose of having different threads? The answer is I/O-bound tasks, i.e. the time a thread spends waiting for input/output operations to complete. In the x86 architecture, the I/O subsystem is physically represented by the Northbridge and the Southbridge, the first one connected directly to the processor through the front-side bus (FSB). This I/O subsystem holds the memory controller, the system bus (PCI-E), disk controllers (ATA, floppy), etc.

I/O tends to take a significant amount of time, and during this time the CPU is just waiting for an interrupt. From the Python wiki:

“Note that potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL”

That means the GIL can be released during I/O processing (or inside thread-safe C extensions in CPython), allowing the execution of other CPython bytecode. Although it is now 10 years old, “Understanding the Python GIL” is still a good reference for understanding how the GIL works with threads.

Marcus McCurdy has an interesting article1, with examples, showing that Python multithreading can be beneficial for I/O-intensive tasks.
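To see this benefit for yourself, here is a minimal sketch; the “I/O” is simulated with time.sleep, which releases the GIL just like a real blocking call would (fetch and run_threaded are illustrative names, not from any library). Five 0.2-second waits complete in roughly 0.2 seconds instead of 1 second:

```python
import threading
import time

def fetch(results, idx):
    # Simulated I/O-bound call: time.sleep releases the GIL,
    # so the other threads keep running while this one waits.
    time.sleep(0.2)
    results[idx] = idx

def run_threaded(n):
    results = {}
    workers = [threading.Thread(target=fetch, args=(results, i))
               for i in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

start = time.monotonic()
results = run_threaded(5)
elapsed = time.monotonic() - start
print(sorted(results))   # all five workers finished
print(elapsed)           # roughly 0.2 s, not 5 * 0.2 s
```

With a CPU-bound loop in place of the sleep, the same code would show no speedup, because the GIL is never released for long.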

Python threads vs user level threads.

Those concepts are different, despite both being called threads. Python threads are system-level: they rely on the kernel scheduler for preemptive multitasking. Although only one thread can run in the interpreter at a time, the scheduling is done by the operating system.

User level threads, on the other hand, are a language-level implementation of threads. The scheduling is non-preemptive, which means the threads must be cooperative: there is no implicit scheduling, so it must be done manually. It is the running thread that decides when to yield execution to the next one2.

What does this mean? You need to be careful when implementing code with user level threads, because they aren’t operating system threads but coroutines that schedule themselves manually. A selfish user level thread will starve the others in the pool.
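The cooperative model can be sketched with plain generators, with no green thread library required (scheduler and worker below are hypothetical names for illustration, not any library’s API). Each task hands control back with yield; a task that never yielded would monopolize the loop forever, which is exactly the starvation problem described above:

```python
from collections import deque

def scheduler(tasks):
    # Minimal round-robin scheduler: each task is a generator that
    # must yield to hand control back; scheduling is deterministic.
    queue = deque(tasks)
    order = []
    while queue:
        task = queue.popleft()
        try:
            order.append(next(task))  # run until the next yield point
            queue.append(task)        # cooperative: re-queue the task
        except StopIteration:
            pass                      # task finished, drop it
    return order

def worker(name, steps):
    for step in range(steps):
        yield (name, step)            # cooperative yield point

order = scheduler([worker('a', 2), worker('b', 2)])
print(order)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Note the strict alternation in the output: as with green threads, the creation order in the pool determines the execution order.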

The main green thread libraries in Python are eventlet and gevent.

Both are very similar, although their APIs are not the same. Apart from this, I admit I don’t know the differences between them.

When sharing is mandatory.

Let me write a very simple script where several threads are spawned, each executing a simple loop that increments a counter.

import eventlet
import time
import threading

THREADS_OR_GREENTHREADS = 'threads'
# THREADS_OR_GREENTHREADS = 'greenthreads'
num_threads = 5
threads = {}
greenthreads_pool = eventlet.GreenPool(num_threads)
shared_data = {idx: 0 for idx in range(num_threads)}
running = True
_sleep = (time.sleep if THREADS_OR_GREENTHREADS == 'threads' else
          eventlet.sleep)

def main_loop(data, index):
    global running
    print('  - Starting thread number %s' % index)
    while running:
        data[index] += 1
        # _sleep(0)
    print('  - Stopping thread number %s' % index)

def run_threads():
    for idx in range(num_threads):
        threads[idx] = threading.Thread(target=main_loop,
                                        args=(shared_data, idx))
        threads[idx].start()

def stop_threads():
    for idx in range(num_threads):
        threads[idx].join()

def run_greenthreads():
    for idx in range(num_threads):
        greenthreads_pool.spawn(main_loop, shared_data, idx)

def stop_greenthreads():
    greenthreads_pool.waitall()

print('Start running threads')
locals()['run_' + THREADS_OR_GREENTHREADS]()
_sleep(5)
running = False
locals()['stop_' + THREADS_OR_GREENTHREADS]()

print(shared_data)
print(sum(shared_data.values()) // 10**3)
exit(0)

With THREADS_OR_GREENTHREADS we can choose between threading and eventlet (green threads). But the same code executed with green threads does not end (actually it spawns only a single green thread). The main thread creates num_threads green threads and, when it calls _sleep(5), yields execution to the next thread. Bear in mind that user level threads (green threads) are deterministic: the thread creation order in the pool defines the execution order. That’s why the message “- Starting thread number 0” is printed: the next thread in the pool, apart from the main one, is the thread with index 0.

However, with the _sleep() call inside main_loop commented out, the thread with index 0 never returns control to the pool, and even the main thread is blocked. Just a reminder: although I call the initial one the “main thread”, there is no concept of prioritization; each green thread in the pool is scheduled sequentially and always in the same order (determinism).

To allow other green threads to run, the executing thread needs to yield control, in this case by sending the thread to sleep.

Multithreading in OpenStack.

In OpenStack there are several processes making use of eventlet3. The DHCP agent4, for example, has a pool of green threads processing the updates received for the different resources (ports, networks and subnets).

In a follow-up post, I’ll write about possible alternatives, for example replacing multithreading in the agent processes (OVS agent, DHCP agent, L3 agent) with multiprocessing.
