multiprocessing
The multiprocessing module is a cross-platform multi-process module. It provides a Process class to represent a process object, and a Pool class that supplies a specified number of worker processes for the caller to use. The module also wraps the underlying mechanisms and offers several ways to exchange data between processes, such as Queue and Pipe.
1. The Process class
The multiprocessing module provides a Process class to represent a process object.
Example:
from multiprocessing import Process
import os

# Code to be run in the child process
def run_proc(name):
    print('Run child process %s (%s)...' % (name, os.getpid()))

if __name__=='__main__':
    print('Parent process %s.' % os.getpid())
    p = Process(target=run_proc, args=('test',))
    print('Child process will start.')
    p.start()
    p.join()
    print('Child process end.')
The output looks like this:
Parent process 928.
Child process will start.
Run child process test (929)...
Child process end.
To create a child process, you only need to pass in the function to execute and its arguments: create a Process instance and start it with the start() method. The join() method waits for the child process to finish before execution continues, and is usually used to synchronize processes.
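To make the synchronization role of join() more concrete, here is a minimal sketch of my own (not from the original text) that starts several child processes and joins them all, so the parent only continues once every child has exited:

from multiprocessing import Process
import os

def worker(n):
    print('Worker %s running in process %s' % (n, os.getpid()))

if __name__ == '__main__':
    # Start several child processes, then wait for all of them to finish.
    procs = [Process(target=worker, args=(i,)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()    # block until this child has exited
    print('All child processes finished.')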
2. The Pool class
If you need to start a large number of child processes, you can create them in batches with a process pool. The multiprocessing module provides a Pool class, which supplies a specified number of worker processes for the caller to use.
When a new request is submitted to the Pool, a new process is created to run it if the pool is not yet full; if the pool is full, the request has to wait until a process in the pool finishes and becomes available to run it.
2.1 Examples
Example 1:
from multiprocessing import Pool
import os, time, random

def long_time_task(name):
    print('Run task %s (%s)...' % (name, os.getpid()))
    start = time.time()
    time.sleep(random.random() * 3)
    end = time.time()
    print('Task %s runs %0.2f seconds.' % (name, (end - start)))

if __name__=='__main__':
    print('Parent process %s.' % os.getpid())
    p = Pool(4)
    for i in range(5):
        p.apply_async(long_time_task, args=(i,))
    print('Waiting for all subprocesses done...')
    p.close()
    p.join()
    print('All subprocesses done.')
Output:
Parent process 669.
Waiting for all subprocesses done...
Run task 0 (671)...
Run task 1 (672)...
Run task 2 (673)...
Run task 3 (674)...
Task 2 runs 0.14 seconds.
Run task 4 (673)...
Task 1 runs 0.27 seconds.
Task 3 runs 0.86 seconds.
Task 0 runs 1.41 seconds.
Task 4 runs 1.91 seconds.
All subprocesses done.
Example 2:
from multiprocessing import Pool
import time

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes
    result = pool.apply_async(f, (10,))   # evaluate "f(10)" asynchronously in a single process
    print(result.get(timeout=1))          # prints "100" unless your computer is *very* slow
    print(pool.map(f, range(10)))         # prints "[0, 1, 4,..., 81]"
    it = pool.imap(f, range(10))
    print(next(it))                       # prints "0"
    print(next(it))                       # prints "1"
    print(it.next(timeout=1))             # prints "4" unless your computer is *very* slow
    result = pool.apply_async(time.sleep, (10,))
    print(result.get(timeout=1))          # raises multiprocessing.TimeoutError
2.2 A brief overview of Pool's methods
1. apply()
Signature: apply(func[, args=()[, kwds={}]])
Passes variable arguments to func, much like Python's built-in apply function, and blocks the main process until the function has finished. (Its use is discouraged; note that the built-in apply it mirrors was removed in Python 3.x.)
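As a quick illustration of the blocking behaviour, here is a minimal sketch of my own (not from the original post): each apply() call only returns after its task has finished, so the tasks run one after another.

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    with Pool(2) as pool:
        # apply() blocks until its task finishes, so these calls run sequentially
        for i in range(3):
            print('square(%s) = %s' % (i, pool.apply(square, (i,))))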
2. apply_async()
Signature: apply_async(func[, args=()[, kwds={}[, callback=None]]])
Used like apply, but it is non-blocking and supports a callback that is invoked once the result is available.
Pool.apply_async is also like Python's built-in apply, except that the call returns immediately instead of waiting for the result. An ApplyResult object is returned. You call its get() method to retrieve the result of the function call. The get() method blocks until the function is completed.
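A minimal sketch of my own (not from the original) showing both the callback and get(): the callback receives the return value in the parent process once the task finishes, and get() blocks until the result is ready.

from multiprocessing import Pool

def square(x):
    return x * x

def on_done(result):
    # invoked in the parent process once the task has finished
    print('callback received:', result)

if __name__ == '__main__':
    with Pool(2) as pool:
        res = pool.apply_async(square, (7,), callback=on_done)
        print('apply_async returned immediately')
        print('get() ->', res.get(timeout=5))   # blocks until the result is ready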
3. map()
Signature: map(func, iterable[, chunksize=None])
Pool's map method behaves much like the built-in map function; it blocks the calling process until all results have been returned.
Note: although the second argument is an iterable, in practice the entire sequence is consumed before the tasks are dispatched to the child processes.
4. map_async()
Signature: map_async(func, iterable[, chunksize[, callback]])
Used like map, but non-blocking. The notes for apply_async apply here as well.
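A minimal sketch of my own: map_async returns an AsyncResult immediately, and its get() blocks until the whole list of results is ready.

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    with Pool(4) as pool:
        async_res = pool.map_async(square, range(8))
        print('map_async returned immediately')
        print(async_res.get(timeout=10))   # prints [0, 1, 4, 9, 16, 25, 36, 49]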
5. close()
Closes the pool so that it no longer accepts new tasks.
6. terminate()
Terminates the worker processes immediately; pending tasks are not processed.
7. join()
The main process blocks until the child processes have exited; join() must be called after close() or terminate().
2.3 The differences between apply_async, apply, map, and map_async
Notice also that you could call a number of different functions with Pool.apply_async (not all calls need to use the same function). In contrast, Pool.map applies the same function to many arguments. However, unlike Pool.apply_async, the results are returned in an order corresponding to the order of the arguments. Like Pool.apply, Pool.map blocks until the complete result is returned.
In Python 3, a new method, starmap, accepts multiple arguments (a minimal sketch follows the table below).
             Multi-args   Concurrence   Blocking   Ordered-results
map          no           yes           yes        yes
apply        yes          no            yes        no
map_async    no           yes           no         yes
apply_async  yes          yes           no         no
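Here is a minimal sketch of Pool.starmap of my own (it requires Python 3.3+): each tuple in the iterable is unpacked into the positional arguments of the function.

from multiprocessing import Pool

def add(a, b):
    return a + b

if __name__ == '__main__':
    with Pool(2) as pool:
        # each tuple is unpacked into the arguments of add(a, b)
        print(pool.starmap(add, [(1, 2), (3, 4), (5, 6)]))   # prints [3, 7, 11]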
3. Queue/Pipe
3.1 Queue
class multiprocessing.Queue([maxsize])
Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual Queue.Empty and Queue.Full exceptions from the standard library's Queue module are raised to signal timeouts.
Example:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()
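To show the timeout behaviour mentioned above, here is a minimal sketch of my own (in Python 3 the exceptions live in the queue module):

import queue
from multiprocessing import Queue

if __name__ == '__main__':
    q = Queue()
    q.put('item')
    print(q.get(timeout=1))       # prints "item"
    try:
        q.get(timeout=0.5)        # the queue is empty now
    except queue.Empty:
        print('nothing arrived within 0.5 seconds')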
3.2 Pipe
multiprocessing.Pipe([duplex])
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
In other words, Pipe() returns a pair of connection objects joined by a pipe, which by default is bidirectional. For example:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())    # prints "[42, None, 'hello']"
    p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods.
Note that the data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. There is, of course, no risk of corruption when the processes use different ends of the pipe at the same time.
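To illustrate the safe pattern, here is a minimal sketch of my own showing two-way traffic where each process keeps to its own end of the pipe:

from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()                  # the child only uses its own end
    conn.send('child got: %s' % msg)
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()   # bidirectional by default
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send('ping')           # the parent only uses its own end
    print(parent_conn.recv())          # prints "child got: ping"
    p.join()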
Thanks:
http://blog.shenwei.me/python-multiprocessing-pool-difference-between-map-apply-map_async-apply_async/