One of the ways to exchange data between processes with the multiprocessing module is directly sharing memory via multiprocessing.Value. Like any very general method, it can sometimes be tricky to use. I've seen a variation of this question asked a couple of times on StackOverflow:
I have some processes that do work, and I want them to increment some shared counter because [... some irrelevant reason ...] - how can this be done?
The wrong way
And surprisingly enough, some answers given to this question are wrong, since they use multiprocessing.Value incorrectly, as follows:
import time
from multiprocessing import Process, Value

def func(val):
    for i in range(50):
        time.sleep(0.01)
        val.value += 1

if __name__ == '__main__':
    v = Value('i', 0)
    procs = [Process(target=func, args=(v,)) for i in range(10)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(v.value)
This code is a demonstration of the problem, distilling only the usage of the shared counter. A "pool" of 10 processes is created to run the func function. All processes share a Value, and each increments it 50 times. You would expect this code to eventually print 500, but in all likelihood it won't. Here's some output taken from 10 runs of that code:
> for i in {1..10}; do python sync_nolock_wrong.py; done
435
464
484
448
491
481
490
471
497
494
Why does this happen?
I must admit that the documentation of multiprocessing.Value can be a bit confusing here, especially for beginners. It states that by default, a lock is created to synchronize access to the value, so one may be falsely led to believe that it would be OK to modify this value in any way imaginable from multiple processes. But it's not.
Explanation - the default locking done by Value
This section is advanced and isn't strictly required for the overall flow of the post. If you just want to understand how to synchronize the counter correctly, feel free to skip it.
The locking done by multiprocessing.Value is very fine-grained. Value is a wrapper around a ctypes object, which has an underlying value attribute representing the actual object in memory. All Value does is ensure that only a single process or thread may read or write this value attribute at a time. This is important, since (for some types, on some architectures) writes and reads may not be atomic. That is, to actually fill the object's memory the CPU may need several instructions, and another process reading the same (shared) memory at that moment could see some intermediate, invalid state. The built-in lock of Value prevents this from happening.
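You can poke at this wrapper directly. Here's a quick probe (a minimal sketch, assuming CPython's standard multiprocessing, where the synchronized wrapper exposes get_obj() and get_lock()):

from multiprocessing import Value

v = Value('i', 0)
print(v.get_obj())   # the underlying ctypes object, e.g. c_int(0)
print(v.get_lock())  # the recursive lock guarding individual reads/writes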
However, when we do this:
val.value += 1
What Python actually performs is the following (bytecode disassembled with the dis module). I've annotated the locking done by Value in #<--- comments:
0 LOAD_FAST 0 (val)
3 DUP_TOP
#<--- Value lock acquired
4 LOAD_ATTR 0 (value)
#<--- Value lock released
7 LOAD_CONST 1 (1)
10 INPLACE_ADD
11 ROT_TWO
#<--- Value lock acquired
12 STORE_ATTR 0 (value)
#<--- Value lock released
So it's obvious that while process #1 is at instruction 7 (LOAD_CONST), nothing prevents process #2 from also loading the (old) value attribute and reaching instruction 7 as well. Both processes then increment their private copies and write them back. The result: the actual value gets incremented only once, not twice.
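By the way, you can reproduce this listing yourself with the dis module (the exact offsets and opcode names vary between Python versions, but the load/add/store structure is the same):

import dis

def inc(val):
    val.value += 1

dis.dis(inc)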
The right way
Fortunately, this problem is very easy to fix. A separate Lock is needed to guarantee the atomicity of modifications to the Value:
import time
from multiprocessing import Process, Value, Lock

def func(val, lock):
    for i in range(50):
        time.sleep(0.01)
        with lock:
            val.value += 1

if __name__ == '__main__':
    v = Value('i', 0)
    lock = Lock()
    procs = [Process(target=func, args=(v, lock)) for i in range(10)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(v.value)
Now we get the expected result:
> for i in {1..10}; do python sync_lock_right.py; done
500
500
500
500
500
500
500
500
500
500
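As an aside, the lock Value creates internally can be reused for this purpose: the synchronized wrapper exposes it via get_lock(). Here's a variant of func using it instead of a separate Lock (a sketch, not from the original post; assumes the same imports as above):

def func(val):
    for i in range(50):
        time.sleep(0.01)
        with val.get_lock():  # the same lock Value uses internally
            val.value += 1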
A value and a lock may seem like too much baggage to carry around at all times. So we can create a simple "synchronized shared counter" object that encapsulates this functionality:
import time
from multiprocessing import Process, Value, Lock

class Counter(object):
    def __init__(self, initval=0):
        self.val = Value('i', initval)
        self.lock = Lock()

    def increment(self):
        with self.lock:
            self.val.value += 1

    def value(self):
        with self.lock:
            return self.val.value

def func(counter):
    for i in range(50):
        time.sleep(0.01)
        counter.increment()

if __name__ == '__main__':
    counter = Counter(0)
    procs = [Process(target=func, args=(counter,)) for i in range(10)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(counter.value())
Bonus: since we've now placed a coarser-grained lock on the modification of the value, we may throw away Value with its fine-grained lock altogether, and just use multiprocessing.RawValue, which simply wraps a shared object without any locking.
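Here's what that variant of Counter might look like (a sketch under that assumption; RawValue has no built-in lock, so every access must go through our own):

from multiprocessing import RawValue, Lock

class Counter(object):
    def __init__(self, initval=0):
        self.val = RawValue('i', initval)  # no built-in lock
        self.lock = Lock()                 # our coarse-grained lock

    def increment(self):
        with self.lock:
            self.val.value += 1

    def value(self):
        with self.lock:
            return self.val.value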
Source: http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing