Multiprocessing Niceness in Python

Andrew Bolster

Data Science Team Lead at WhiteHat Security, Trustee at Farset Labs and Vault Artist Studios

Quick and dirty one that tripped me up.

Recently I’ve been doing lots of multiprocessing and joblib-based parallel processing, with loooong simulation times.

In an effort to make sure that my machine was still useable during these runs, I changed the ‘niceness’ value of the spawned processes… or so I thought.

import os
import struct

import numpy as np
from multiprocessing import current_process
...
def thread_mask(args):
    # Properly Parallel RNG: give each worker its own seed so that
    # subprocesses don't share random streams
    # http://stackoverflow.com/questions/444591/convert-a-string-of-bytes-into-an-int-python
    myid = current_process()._identity[0]
    np.random.seed(myid ^ struct.unpack("<L", os.urandom(4))[0])

    os.nice(5)

    return long_simulation(args)

The first part is a handy way to make sure that your subprocess simulations actually use different random numbers, which for Monte Carlo-style simulation is pretty damned important.
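The seed derivation itself can be pulled out and inspected on its own. Here's a minimal sketch (the `worker_seed` name is mine, and the bare worker id stands in for `current_process()._identity[0]`): it XORs the worker's id with 4 bytes of OS entropy, giving each process an effectively independent 32-bit seed.

```python
import os
import struct

def worker_seed(worker_id):
    """Derive a per-worker 32-bit seed: worker id XOR 4 bytes of OS entropy."""
    return worker_id ^ struct.unpack("<L", os.urandom(4))[0]

# Two hypothetical workers; each seed fits in the 32-bit range
# that np.random.seed accepts.
s1, s2 = worker_seed(1), worker_seed(2)
```

Without something like this, workers forked from the same parent inherit identical NumPy RNG state and happily draw the same "random" numbers.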

But back to that simple enough os.nice call. It reads like it says "Set the niceness value of this process to 5".

It does not; it increments the niceness value by the argument (and returns the new value).

This means that after a few repeated iterations of this simulation, my processes end up at the maximum niceness (i.e. lowest priority) of 19, which is not ideal.
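The increment behaviour is easy to check interactively (a minimal sketch; note that an unprivileged process can only ever raise its niceness, and POSIX caps it at 19):

```python
import os

current = os.nice(0)  # incrementing by 0 just reports the current niceness
raised = os.nice(5)   # increments by 5 -- it does NOT set niceness to 5

# raised is current + 5 (capped at 19), and the change sticks for this
# process: a second os.nice(5) would push it another 5 higher.
```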

Simple enough fix, though: swap the os.nice(5) call for:

    # Be Nice: read the current niceness, then increment by
    # exactly enough to land on 5
    niceness = os.nice(0)
    os.nice(5 - niceness)

Published: May 07 2014
