How fast are we?
Complex simulations demand outstanding computing power. Even on the latest hardware with top performance specifications, your problem may take hours, days, or even weeks to solve. That is frustrating when your research is under time pressure, as it so often is.
GPMagnet goes one step further in computational physics, speeding up the whole process through the judicious use of parallel computing. Our previous simulation code, written in Fortran and not parallelized at all, was so slow that GPMagnet is now about 75 times faster on average. The larger the sample size, the greater the advantage of a parallelized solution.
As you can see in the graph above, GoParallel has definitely found a solution for these heavy scientific computations. Compared to OOMMF, a widely used and well-tested reference simulation package, an average speedup of more than 15x has already been achieved, reaching a factor of 30x and beyond for larger sizes. And our software engineers keep working hard on optimizations that improve the simulation speed with each release.
How did we do it?
The trick lies in Graphics Processing Units, or GPUs. For example, the latest GPUs from Nvidia contain 512 computational cores that can be programmed using CUDA. This highly specialized hardware delivers huge computational power at a reduced cost: it is cheaper than a typical supercomputer, and its power consumption is lower too.
But hardware is not everything. Making the most of GPUs through parallel programming is no piece of cake. Highly skilled programmers are essential to obtain a truly significant speed-up, since developing CUDA applications requires a change of mindset.
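To give a flavor of that change of mindset, here is a minimal CUDA sketch (purely illustrative, not GPMagnet's actual code): instead of a sequential loop over array elements, the work is expressed as thousands of lightweight GPU threads, each handling one element.

```cuda
#include <cuda_runtime.h>

// Each thread scales a single element of the array. The CPU-style
// "for" loop over elements disappears: the GPU launches one thread
// per element instead.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard against threads past the array end
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                 // one million elements
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float)); // allocate on the GPU
    // ... copy the field data to d_data (omitted for brevity) ...

    int threads = 256;                      // threads per block
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);  // launch kernel
    cudaDeviceSynchronize();                // wait for the GPU to finish

    cudaFree(d_data);
    return 0;
}
```

Mapping work onto blocks and threads, keeping data on the device, and minimizing CPU-GPU transfers is exactly the kind of restructuring that distinguishes CUDA development from conventional serial programming.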
Therefore, to use GPMagnet and solve your micromagnetic problems quickly and efficiently, you will need a system with Nvidia GPUs (check the System Requirements). You can use your own hardware, or you can acquire the latest models here.