Equation Solution  
    High Performance by Design



It is not late for us to think parallel


[Posted by Jenn-Ching Luo on Mar. 21, 2009 ]

        After reading the article "Intel wants developers to think parallel" posted on www.computerworld.com, I found I am not the only one concerned that some developers are heading in the wrong direction. Other people are also seeking an answer to why their parallelizable program cannot speed up, or even runs slower.

        You may be interested to know how I learned that developers failed to speed up their programs. Equation.com provides a parallelizable benchmark, LAIPE, which shows a highly efficient speedup. Some developers asked why the LAIPE benchmark can speed up so efficiently while their own parallel programs cannot. They were trying to find an answer. A correspondence started from there, and I found they had run in the wrong direction.

        Most of them thought that parallel computing is a reconstruction of a program by, for example, identifying and parallelizing loops. What they did was insert compiler directives to reconstruct the program. It is no surprise their programs could not speed up on multicores. I suggested they rethink their problem and find a parallel algorithm for their parallel computing.

        The "think-parallel" article cites a presentation by James Reinders, Intel's director and chief evangelist for software development products, at the SD West 2009 conference in Santa Clara, California. According to the "think-parallel" article, Reinders suggested eight rules for developers:
  1. think parallel
  2. program using abstraction
  3. program tasks, not threads
  4. design with the option of turning off concurrency
  5. avoid locks when possible
  6. use tools and libraries designed to help with concurrency
  7. use scalable memory
  8. design to scale through increased workloads
These eight rules are also a good answer for those who cannot speed up their parallel programs.

        The LAIPE solver is built on asynchronous parallelism, not on parallelized loops, and it certainly yields an efficient speedup. Parallel computing was initially meant to speed up scientific and engineering computing. I started working on parallel computing in 1987. In those days, we treated parallel computing as a mathematical question, not a programming issue. Before we had multiprocessor computers, we solved mathematical equations sequentially. Parallel computing is about finding a method to solve mathematical equations in parallel. What we focused on was the method (that is, parallel algorithms). What we considered was the problem itself, not the surface of a computer program.

        Nowadays, some developers run the opposite way. Their attention is on the surface of a program. They identify parallelizable blocks and loops on the surface of a program, and then insert compiler directives to reconstruct it. They call that parallel computing. We cannot say such an approach does not work at all. Skilled programmers can make some programs, for example those with a few loops of sufficient size (or nested loops), show a speedup on 2 cores. But that is not the future of employing multicores. According to the "think-parallel" article, Reinders raised a similar concern:

While we're still wrestling with 'How do I use two, four, eight cores?' we're going to throw into the mix a processor with dozens of cores.

Parallel computing is not about the surface of a program. Developing an efficient parallel algorithm is more important than programming.