UPDATE: a better parallel algorithm will be included in a future version of DEoptim, so I've removed my package from CRAN. You can still use the code from this post, but keep Josh's comments in mind.
Last night I was working on a difficult optimization problem, using the wonderful DEoptim package for R. Unfortunately, the optimization was taking a long time, so I thought I'd speed it up using a foreach loop, which resulted in the following function:
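The original function isn't reproduced in this extract, so here is a sketch of what the next paragraph describes, not the post's exact code. The segmentation scheme (segment i spans the i-th sub-interval of every parameter at once), the helper `segment_bounds`, and the interface of `parDEoptim` are assumptions; the original combined results via a "parDEoptim" class inside the foreach loop, while this sketch simply picks the best result afterwards. A foreach parallel backend (e.g. doParallel) must be registered for `%dopar%` to run in parallel.

```r
library(DEoptim)
library(foreach)

# Divide the bounds for each parameter into n segments; segment i is the
# box between the (i-1)-th and i-th breakpoints in every dimension.
# (This scheme is inferred from the post's description, not confirmed.)
segment_bounds <- function(lower, upper, n) {
  step <- (upper - lower) / n
  lapply(seq_len(n), function(i) {
    list(lower = lower + (i - 1) * step,
         upper = lower + i * step)
  })
}

# Run DEoptim on each segment in parallel and keep the best result.
# The name parDEoptim and this exact signature are assumptions.
parDEoptim <- function(fn, lower, upper, n = 2,
                       control = DEoptim.control(), ...) {
  segments <- segment_bounds(lower, upper, n)
  results <- foreach(seg = segments) %dopar% {
    DEoptim(fn, lower = seg$lower, upper = seg$upper,
            control = control, ...)
  }
  # Return the run whose best objective value is lowest
  best <- which.min(vapply(results,
                           function(r) r$optim$bestval,
                           numeric(1)))
  results[[best]]
}
```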
Here's what's going on: I divide the bounds for each parameter into n segments, use a foreach loop to run DEoptim on each segment, collect the results, and then return the optimization results for the segment with the lowest value of the objective function. I also defined a "parDEoptim" class to make it easier to combine the results during the foreach loop. All of the work is still done by the DEoptim algorithm; I've simply split the problem into several chunks.
Here is an example, straight out of the DEoptim documentation:
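The example code itself isn't shown in this extract. The example in the DEoptim documentation is the Rosenbrock "banana" function, whose global minimum is 0 at c(1, 1); a serial run over the bounds discussed below looks like this (the seed and itermax values are illustrative):

```r
library(DEoptim)

# Rosenbrock "banana" function, as in the DEoptim documentation;
# global minimum of 0 at c(1, 1)
Rosenbrock <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

set.seed(1234)  # for reproducibility; the value is arbitrary
serial <- DEoptim(Rosenbrock,
                  lower = c(-10, -10), upper = c(10, 10),
                  control = DEoptim.control(itermax = 200, trace = FALSE))
serial$optim$bestmem  # should be very close to c(1, 1)
```

The parallel comparison in the post ran the same problem through the wrapper function over 20 segments; as noted below, each segment's run still needs enough iterations to converge on its own.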
In theory, on a 20-core machine, this should run a bit faster than the serial example. Note that you may need to set itermax for the parallel run at a higher value than (itermax for the serial run)/(number of segments), as you want to make sure the algorithm can find the minimum of each segment. Also note that, in this example, there are 20 segments on the interval c(-10,-10) to c(10,10), which means that 2 of the segments have boundaries at c(1,1), which is the global minimum of the function. The DEoptim algorithm has no trouble finding a solution at the boundary of the parameter space, which is why it's so easy to parallelize.
Rumor has it that the next version of DEoptim will include foreach parallelization, but if you can't wait until then, I rolled up the above function into an R package and posted it to CRAN. Let me know what you think!