Equation Solution  
    High Performance by Design
Multiprocessing is Selective


[Posted by Jenn-Ching Luo on Aug. 06, 2009 ]

        Ed Sperling raised the question "Is Multiprocessing Possible?" in an article posted at forbes.com on Aug. 3, 2009, and called for a way to parallelize all applications. For example, Sperling wrote:

At that time, however, multiprocessing was considered only if there was a clear benefit to splitting up an application across multiple processors or multiple machines. Now it's a requirement for all applications.

Parallel processing is, however, selective. Whether every application is suitable for parallel processing is an open question.

        One conceivable way to parallelize all applications is to distribute loop iterations among processors (or cores), i.e., parallelization of loops. So far, we have no reliable way to parallelize loops efficiently. Since multicores became a common feature of modern PCs, many applications have parallelized their loops, and a common question keeps being asked: why doesn't the "parallel version" of a program speed up on multicores? Parallelizing loops cannot guarantee a speedup presently. The word "presently" deserves emphasis. We don't know what will happen in the future; maybe new methodologies will one day parallelize loops efficiently.
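To make "parallelization of loops" concrete, here is a minimal sketch in Python: the iterations of one loop are distributed among worker processes with a fork-join pool. The function names are illustrative, not taken from any program discussed in the post.

```python
# Minimal sketch of loop parallelization: iterations of one big loop
# are split among worker processes (fork), then results are collected (join).
from multiprocessing import Pool

def body(i):
    # stand-in for one independent loop iteration
    return i * i

def parallel_loop(n, workers=4):
    # fork: distribute iterations among processes; join: gather results
    with Pool(workers) as pool:
        return sum(pool.map(body, range(n)))

if __name__ == "__main__":
    # same answer as the sequential loop
    assert parallel_loop(1000) == sum(i * i for i in range(1000))
```

This only pays off when the iterations are independent and each does enough work to amortize the cost of distributing them.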

        Parallelization of loops can yield a speedup if a program consists of one big enough loop, which can be converted into a fork-join structure. Undoubtedly, that has the potential to be executed efficiently in parallel. However, not every application has only one (or a few) big loops. If a program consists of a sequence of loops, parallelizing those loops yields a sequence of fork-join blocks. What speedup can we expect then?
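The structural problem can be sketched as follows: each parallelized loop becomes its own fork-join block, and each block pays its own startup and synchronization cost. The step functions below are illustrative placeholders.

```python
# Sketch of a sequence of parallelized loops: one fork-join block per loop.
# Each Pool creation and map() is a synchronization point whose overhead
# is paid again for every loop in the sequence.
from multiprocessing import Pool

def step1(x):
    return x + 1

def step2(x):
    return x * 2

def sequence_of_fork_joins(data, workers=4):
    with Pool(workers) as pool:      # fork-join block 1
        data = pool.map(step1, data)
    with Pool(workers) as pool:      # fork-join block 2
        data = pool.map(step2, data)
    return data

if __name__ == "__main__":
    assert sequence_of_fork_joins([1, 2, 3]) == [4, 6, 8]
```

When the loops are small, these repeated fork-join costs can easily outweigh the parallel work, which is one reason a "parallel version" may run no faster.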

        We have seen the limits of parallelization of loops. If parallelizing loops has become a widespread demand among programmers, we need to look for new methodologies or hardware to meet it. As noted in Sperling's post, "[w]hether the federal funding can break the logjam and solve this problem is unknown." Indeed, there are many uncertainties.

        Recent reports show the GPU may be a good choice for parallelizing loops. Yet even if all the components needed to parallelize loops efficiently were available, only applications with a big enough loop would benefit. Parallel processing is selective. If an application completes within seconds, why bother parallelizing it? If an application is highly sequential in nature, there is not much parallelism from which to gain a speedup. Not every application is suitable for parallel processing.
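The point about highly sequential applications can be quantified with Amdahl's law (a standard result, not from the post): if only a fraction p of the work can be parallelized, n cores give a speedup of 1 / ((1 - p) + p/n).

```python
# Amdahl's law: upper bound on speedup when a fraction p of the work
# is parallelizable across n cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A highly sequential program (p = 0.10) barely gains, no matter how
# many cores are available: the bound is 1 / (1 - 0.10), about 1.11x.
```

For example, with p = 0.10 even a million cores yield barely 1.11x, whereas a fully parallel loop (p = 1.0) on 4 cores reaches the ideal 4x.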