Yield results asynchronously in Python using multiprocessing or twisted


I have an embarrassingly parallel application, where the order of the results does not matter.

I have a function and a list of 1000 arguments on which to run it:

  def _process_parallel(function, args_list, args_dict={}):

I have written multiprocessing code that parallelizes this:

  import time
  import multiprocessing

  __pool__ = multiprocessing.Pool()

  def _process_parallel(function, args_list, args_dict={}):
      num_tasks = len(args_list)
      num_tasks_returned_ptr = [0]
      def _callback(result):
          num_tasks_returned_ptr[0] += 1
      # Send all jobs to be executed asynchronously
      # (every AsyncResult, and thus every result, stays alive in this list)
      apply_results = [__pool__.apply_async(function, arg, args_dict, _callback)
                       for arg in args_list]
      # Wait until all tasks are processed
      while num_tasks_returned_ptr[0] < num_tasks:
          time.sleep(0.1)

I believe the memory footprint of this is too high: the results of the function are all kept around until every task has been processed.

What I would like instead is a version where a result is not stored once it has been handed back. Something like this:

  def _process_parallel(function, args_list, args_dict={}):
      # Send all jobs to be executed asynchronously,
      # then yield each result as soon as it becomes available
      for result in somepackage.apply_async(function, args_list, args_dict):
          yield result
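A caller could then consume each result and let it be garbage-collected right away, roughly like this (my_function, my_args, and handle are placeholder names, not part of the code above):

  for result in _process_parallel(my_function, my_args):
      handle(result)  # no reference to result survives past this line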

I cannot seem to find a way to do this in multiprocessing. I have heard good things about twisted, but I am not sure whether it is overkill for a task this simple.
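For concreteness, here is a rough, untested sketch of the behaviour I am after, built only from multiprocessing primitives: apply_async's callback runs in the parent process as each task finishes, so it can push results onto a thread-safe queue that the generator drains. (Error handling is omitted; if a worker raises, its callback never fires and the get() below blocks forever.)

  import multiprocessing
  from queue import Queue  # thread-safe; pool callbacks run in a helper thread

  def _process_parallel(function, args_list, args_dict={}):
      pool = multiprocessing.Pool()
      results = Queue()
      # Submit everything up front; each finished task puts its result on the queue
      for arg in args_list:
          pool.apply_async(function, arg, args_dict, callback=results.put)
      # Yield results in completion order; the caller can discard each one
      for _ in range(len(args_list)):
          yield results.get()
      pool.close()
      pool.join()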

Does anyone know of a simple way to build a generator over the results of asynchronous computations, so that each result is yielded as soon as it becomes available?

