High Performance by Design
A Time to Introduce Soft Core Computing
[Posted by Jenn-Ching Luo on Feb. 08, 2012 ]
Our demand to speed up computer applications never ends. Most of us remember the dial-up modem. What a slow connection dial-up was! Watching movies on the internet over dial-up was impossible. Now we enjoy fast internet connections, broadband service being a good example. New technologies make our daily lives easier and more convenient.
This post introduces another new technology: neuLoop, a tool for soft core computing. neuLoop likewise meets our demand to speed up applications on modern computers.
neuLoop is a technology different from well-known programming tools such as OpenMP. Most of those tools are based on threading. neuLoop is not based on threading; it runs on soft cores.
A soft core is a virtual abstraction of a physical core that sits between the physical cores and the application. neuLoop opens the door to soft core computing.
Programming with soft cores is quite different from technologies based on a team of cooperating tasks (e.g., threading). In threading, the program itself creates a team of threads to share a computation, and we are used to calling such an application a parallel program. With soft cores, the program never creates a team of threads. A program in neuLoop is instead written in a form that soft cores can read; the soft cores then execute the program simultaneously to speed up the application. That is the fundamental difference between programming with soft cores and threading.
neuLoop (new loop) provides a simple syntax for rewriting a loop into a form that soft cores can read, and at present it provides two types of soft core: homogeneous and heterogeneous. Preliminary tests show that neuLoop incurs less overhead than OpenMP. In this introduction, we look at some overhead comparisons.
For the comparison, we use the example from the post "Parallelizing Loops on the Face of a Program is not enough for Multicore Computing": a matrix multiplication that was previously parallelized with OpenMP. The OpenMP timing results were also published there.
This post rewrites the example program in neuLoop and collects timing results, so we can compare the timings and see the difference in overhead. The FORTRAN program is rewritten as:
TEST PLATFORM AND TIMING RESULTS
For comparison, the test platform is the same one used for the examples in the article "Parallelizing Loops on the Face of a Program is not enough for Multicore Computing": a SunFire v40z with four dual-core Opteron 870 processors (eight cores in total) running Windows 2008 R2. The example program was compiled with GFORTRAN 4.7 without optimization (i.e., with option -O0).
As introduced above, neuLoop has two types of soft core. First, we link the example program against homogeneous cores. The timing results are as follows:
From the timing results, we can see that two soft cores reduce the elapsed time from 171.54 seconds to 87.42 seconds, a 1.96x speedup at 98.11% efficiency; four soft cores cut it to 48.27 seconds, a 3.55x speedup (88.84% efficiency); and eight cores bring it down to 28.89 seconds, a 5.94x speedup. The soft cores clearly speed up the computation.
Speedup, however, is not the focus here. Our interest is in comparing overhead, so let us copy the OpenMP timing results from the article "Parallelizing Loops on the Face of a Program is not enough for Multicore Computing":
Comparing the overhead, we can see that soft cores consistently run faster than OpenMP. For example, one soft core completes the computation in 171.54 seconds, while one OpenMP thread takes 193.83 seconds. This is the important finding: soft cores require less overhead than threading, so a program in neuLoop runs faster than the same program with OpenMP.
The remaining timings tell the same story. Two soft cores take 87.42 seconds, while two OpenMP threads take 98.61 seconds; four soft cores take 48.27 seconds against 52.82 seconds for four OpenMP threads; and eight soft cores complete the example in 28.89 seconds, while eight OpenMP threads take 37.33 seconds. The results are consistent: soft cores in neuLoop require less overhead, and that is their advantage.
WHICH TYPE OF SOFT CORE IS BETTER
As mentioned previously, neuLoop has two types of soft core, and the example above was linked against homogeneous soft cores. How do heterogeneous cores perform? We re-link the program against heterogeneous soft cores to see. A set of timing results follows:
The timing results show different performance for homogeneous and heterogeneous cores, but it is too early to say which type of soft core is better for parallel processing.
The timing results above suggest that homogeneous cores may yield a better speedup when more cores are used, while heterogeneous cores are best suited to a small number of cores. That is not, however, a conclusion applicable to every example and every hardware platform.
In this example, with two cores, heterogeneous soft cores yield an almost perfect 1.99x speedup, while homogeneous soft cores yield 1.96x. At two cores, heterogeneous cores are the more efficient.
With eight cores, however, homogeneous cores yield the better speedup: 5.94x versus 5.16x for heterogeneous cores. Each type of soft core has environments where it works best; neither dominates the other all the time. This writer will post more on soft core computing.