Code Optimization: Speed up your code by rearranging data access

Posted on April 22nd, 2012

Speed is of the essence when it comes to scientific computing. But how do you get your numerical program to run faster? Well, there are many different ways (putting parallel processing and multithreading aside). Besides optimizing the algorithm itself, one of the most effective ways is to consider memory access. When a code runs, its data is stored in the computer’s RAM. However, the CPU cannot operate on the RAM directly. To perform a computation, the data first needs to be copied to the cache, a small local memory the CPU has fast access to. Since copying data between the RAM and the cache takes time, each copy operation grabs a larger block surrounding the data of interest. The hope is that subsequent calculations will use data close enough to the original piece that the needed values already reside in the cache. If not, a cache miss results, and another RAM-to-cache copy is performed.

Note, particle codes often have two distinct sets of memory optimization requirements: one for the field variables, and one for the particles. Here we consider the fields. The particles are the topic of a follow-up post on efficient particle data structures.

The importance of consecutive memory blocks

In practical terms, this means that when writing code, we need to make sure data items sit as close to each other in memory as possible. This reduces the number of required RAM-to-cache transfers. We can illustrate this with a simple example. Let’s say we have a large block of numbers and we want to add them all together. The code will look something like this:

int size = 50*50*50;     /* 125,000 elements */
double array[] = new double[size];

for (int i=0;i<size;i++)
    array[i]=i;

double sum=0;
for (int i=0;i<size;i++)
    sum+=array[i];

This code gives us the best performance: the data is located in a single contiguous memory block, array. But this isn’t the only way to organize the data. Instead of using a single block, the data could be scattered throughout the RAM and referenced via a linked list. Java makes it easy to use linked lists,

LinkedList<Double> linked_list = new LinkedList<Double>();

for (int i=0;i<size;i++)
    linked_list.add(new Double(i));

double sum=0;
for (int i=0;i<size;i++)
    sum+=linked_list.get(i);

Now, in this case we had to introduce a bit of additional overhead, since we access the data via the get() method and the values are stored as Double objects instead of double primitives. To make a more faithful comparison, we can rewrite the first array example using an ArrayList,

ArrayList<Double> array_list = new ArrayList<Double>(size);

for (int i=0;i<size;i++)
    array_list.add(new Double(i));

double sum=0;
for (int i=0;i<size;i++)
    sum+=array_list.get(i);

Although the three codes look quite similar, their performance is anything but! To measure speed, I recorded the start and end time for the summation loop. The actual source code is listed below. In addition, to improve accuracy, the time was measured over 500 iterations of the loop. For size=50*50*50=125,000, the times on my 1.7GHz Intel i7 laptop are:

Array took 0.097051789 seconds
ArrayList took 0.411013258 seconds
LinkedList took 5692.67988 seconds

In other words, the linked list, which accesses data scattered throughout the RAM, ran some 58,000x slower than the contiguous array! Instead of completing in a fraction of a second, the linked list case took almost 95 minutes. (I must admit I did not actually have the patience to wait this long; I extrapolated the time by running only 20 iterations and multiplying the result by 25.) To be fair, not all of this slowdown comes from cache misses: LinkedList.get(i) also has to walk the list from the nearest end on every call, which adds an algorithmic penalty on top of the pointer chasing. But the slowdown can hardly be attributed to the overhead of not using primitives: the ArrayList version took only about 3/10ths of a second longer than the code with the double primitives.
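As an aside, if the goal were simply to traverse the list, a fairer comparison would use the list’s iterator via the enhanced for loop, which reaches each element in constant time and leaves mostly the cache-unfriendly pointer chasing. This is only a sketch on my part and was not part of the timing study above:

/* sketch: traverse the LinkedList with its iterator instead of get(i);
   each element is reached by following the node link, so the remaining
   cost is dominated by the scattered memory accesses */
double sum = 0;
for (double value : linked_list)    /* auto-unboxes each Double */
    sum += value;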

Real-world example: 3D Finite Difference Laplace Solver

The above example is a bit contrived, of course. Using a linked list for a simple array like this would be a really silly idea. Let’s now consider a more realistic example: a finite difference solver. Here we solve an extremely simple problem given by the Laplace equation \(\nabla^2 x=0\) with Dirichlet boundaries. Equations of this kind arise commonly in fluid dynamics, diffusion, and electrostatic problems. The discretization of this equation using the finite difference approach, assuming constant mesh spacing \(dx=dy=dz=dh\), is

$$x_{i-1,j,k}+x_{i+1,j,k}+x_{i,j-1,k}+x_{i,j+1,k}+x_{i,j,k-1}+x_{i,j,k+1} - 6x_{i,j,k} = 0$$

Then using the Gauss-Seidel method, the solver pseudocode becomes:

<for number of iterations / until convergence>
    <for all non-boundary nodes>
        x[i][j][k] = (1/6.0)*(x[i-1][j][k] + x[i+1][j][k] + 
                              x[i][j-1][k] + x[i][j+1][k] + 
                              x[i][j][k-1] + x[i][j][k+1])

This algorithm gives a solution such as the one shown in Figure 1. For this plot, \(x=100\) was set on the \(i=0\) face and \(x=0\) everywhere else.

Figure 1. Solution on the 50x50x50 mesh after 500 iterations.

The question is, what is the best way to iterate through the nodes? There are basically two options: i->j->k or k->j->i. There is no universally correct ordering: the right choice depends on the way the data is stored in RAM. In this example, the data is allocated such that x(i,j,k)=x[i][j][k],

/* in Java, a 3D array is an array of arrays: only the innermost
   double[nz] rows are contiguous blocks of memory */
double x3[][][] = new double[nx][][];
for (int i=0;i<nx;i++)
{
    x3[i] = new double[ny][];
    for (int j=0;j<ny;j++)
        x3[i][j] = new double[nz];
}

First, let’s consider the k->j->i ordering. The solver is then written as:

initData3D(x3);
for (it=0;it<num_it;it++)
{
    for (k=1;k<nz-1;k++)
        for (j=1;j<ny-1;j++)
            for (i=1;i<nx-1;i++)
            {
                x3[i][j][k] = (1/6.0)*(x3[i-1][j][k]+x3[i+1][j][k]+
                                       x3[i][j-1][k]+x3[i][j+1][k]+
                                       x3[i][j][k-1]+x3[i][j][k+1]);
            }
}

For a 100x100x100 array, this case takes 12.69 seconds to complete on my computer. Next, let’s make one simple change: switch the ordering of the three loops to i->j->k. This gives the following code:

initData3D(x3);
for (it=0;it<num_it;it++)
{
    for (i=1;i<nx-1;i++)
        for (j=1;j<ny-1;j++)
            for (k=1;k<nz-1;k++)
            {
                x3[i][j][k] = (1/6.0)*(x3[i-1][j][k]+x3[i+1][j][k]+
                                       x3[i][j-1][k]+x3[i][j+1][k]+
                                       x3[i][j][k-1]+x3[i][j][k+1]);
            }
}

The only difference between the two codes is the loop ordering. Yet the second case takes substantially less time to complete: only 3.42 seconds. That’s over a 3x speedup from the initial implementation! The speedup becomes even more pronounced as the number of nodes is increased. For a 150x150x150 mesh, the times are 75.73 and 11.68 seconds, a 6.48x difference. For a 200x200x200 mesh, the speedup reaches 7x. If this algorithm were representative of the entire simulation program, this change alone could turn a run that previously took a week into one that finishes in a day. The speedup comes from the optimized memory access: only the innermost [k] dimension is allocated as a single contiguous array. As such, we are better off sweeping through an entire [k] row before moving to another [i] or [j] location.
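This layout also suggests a further refinement that I did not time here: since each x3[i][j] is itself a contiguous double[], the row references can be hoisted out of the innermost loop. The following is only a sketch of the idea, reusing the variables from the snippets above:

for (it=0;it<num_it;it++)
    for (i=1;i<nx-1;i++)
        for (j=1;j<ny-1;j++)
        {
            /* grab the contiguous rows once per (i,j) pair */
            double[] row    = x3[i][j];
            double[] row_im = x3[i-1][j], row_ip = x3[i+1][j];
            double[] row_jm = x3[i][j-1], row_jp = x3[i][j+1];
            for (k=1;k<nz-1;k++)
                row[k] = (1/6.0)*(row_im[k]+row_ip[k]+
                                  row_jm[k]+row_jp[k]+
                                  row[k-1]+row[k+1]);
        }

Whether this actually helps depends on how well the JIT already optimizes the repeated indexing, so it should be verified with a timing run rather than assumed.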

Flat vs. 3D array

Three-dimensional data can also be stored as a single one-dimensional array. Such a flat array approach becomes handy when matrix operations need to be considered. For instance, the above Laplace problem could also be written as \(\mathbf{L}\vec{x} = 0\), where \(\mathbf{L}\) is the coefficient matrix containing the discretization of the Laplacian. The size of the flat array is nx*ny*nz. The three-dimensional index can be mapped to the flat index using a scheme such as \(u=i*ny*nz + j*nz + k\). The above example, with the faster i->j->k ordering, can then be written as

double x1[] = new double[nx*ny*nz];
initData1D(x1);

/*precompute node offsets*/
int node_offset[] = nodeOffsets();

start = System.nanoTime();
for (it=0;it<num_it;it++)
{
    for (i=1;i<nx-1;i++)
        for (j=1;j<ny-1;j++)
            for (k=1;k<nz-1;k++)
            {
                int u=IJKtoU(i,j,k);
                sum=0;
                for (int t=0;t<6;t++) sum+=x1[u+node_offset[t]];

                x1[u]=(1/6.0)*sum;
            }
}
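The IJKtoU helper and the node_offset array used above are defined in the complete listing at the end of this article; in essence, assuming the \(u=i*ny*nz + j*nz + k\) mapping, they boil down to:

/*flattens the (i,j,k) index into the 1D index u*/
static int IJKtoU(int i, int j, int k)
{
    return i*(ny*nz) + j*nz + k;
}

/*index offsets from a node u to its six stencil neighbors; these depend
  only on the mesh dimensions, so they can be computed once before the loops*/
static int[] nodeOffsets()
{
    return new int[] {IJKtoU(-1,0,0), IJKtoU(+1,0,0),
                      IJKtoU(0,-1,0), IJKtoU(0,+1,0),
                      IJKtoU(0,0,-1), IJKtoU(0,0,+1)};
}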

In this example, an additional optimization was performed by precomputing the offsets to the neighbor nodes. These offsets refer to the nodes comprising the standard finite difference stencil. With u corresponding to [i][j][k], the data at [i-1][j][k] is located at u-ny*nz, using the indexing scheme from above. Similarly, the data at [i][j+1][k] is located at u+nz. These node offsets are a function of the mesh topology only and do not change with u. They can thus be precomputed, saving 6*nx*ny*nz index calculations per iteration, which for large meshes can be significant. The computational times for this flat 1D case are listed below (in seconds), along with the other two cases and the speed ratios r1 = (3D,kji)/(3D,ijk) and r3 = (1D,ijk)/(3D,ijk). Although the flat 1D case runs slower than the 3D array, the slowdown is nowhere near as dramatic as what was seen with the reversed loop ordering.

 nn |   nodes | 3D,ijk | 3D,kji | 1D,ijk |   r1 |   r3
-------------------------------------------------------
 50 |  125000 |   0.42 |   1.08 |   0.61 | 2.57 | 1.45
100 | 1000000 |   3.42 |  12.69 |   5.41 | 3.71 | 1.58
150 | 3375000 |  11.68 |  75.73 |  20.51 | 6.48 | 1.75
200 | 8000000 |  27.74 | 195.91 |  54.14 | 7.06 | 1.95

These timing studies are also shown graphically in Figure 2 below. The two blue curves correspond to the cases with the i->j->k ordering, while the red curve is for k->j->i. The dashed blue line shows the flattened 1D array. Comparing the blue and red curves shows just how important selecting the correct loop ordering really is!

Figure 2. Simulation time vs. mesh size for the three studied cases.

Summary

The way data is organized in memory can have a huge impact on code performance. In this article several alternative methods for storing and accessing data were considered. Using a linked list instead of a contiguous data array resulted in the same algorithm taking 58,000x longer! Even in the more realistic example of a finite difference solver, simply rearranging the loops through which the data is accessed resulted in a 7x speedup for large meshes. Yet the real lesson here is simply the importance of considering performance during code design. Real simulation codes are orders of magnitude more complex than the simple test cases shown here. In such codes, several different algorithms often compete for memory access, and rearranging one can have unintended consequences on the performance of the others. In large codes, your best bet is to use a profiler, a tool found in most modern development environments (such as Visual Studio, Eclipse, or Netbeans). Profiling will tell you which functions are taking the longest. In addition, timing studies obtained from the profiler or from direct instrumentation come in handy in determining just how effective, if at all, an algorithm rewrite was. From my own experience, rewrites that I thought would definitely improve performance sometimes ended up doing exactly the opposite.

Complete Source Code Listing

package speed;

import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.LinkedList;

/* ************************************************************
 * Code to test the performance of a finite difference 
 * Poisson solver based on memory access ordering
 * 
 * For more information see:
 * https://www.particleincell.com/2012/memory-code-optimization/
 * 
 * ***********************************************************/
public class Speed 
{
    final static int nn = 50;
    final static int num_it = 500;
    final static int nx=nn,ny=nn,nz=nn;

    public static void main(String[] args) 
    {
        int i,j,k;
        int it;
        long start, end;

        /*array*/
        int size = nx*ny*nz;
        double array[] = new double[size];

        for (i=0;i<size;i++)
            array[i]=i;

        double sum=0;
        start = System.nanoTime();
        for (it=0;it<num_it;it++)
            for (i=0;i<size;i++)
                sum+=array[i];
        end = System.nanoTime();
        System.out.println("Array took "+(1e-9)*(end-start)+" seconds");

        /*array list*/
        ArrayList<Double> array_list = new ArrayList<Double>(size);

        for (i=0;i<size;i++)
            array_list.add(new Double(i));

        sum=0;
        start = System.nanoTime();
        for (it=0;it<num_it;it++)
            for (i=0;i<size;i++)
                sum+=array_list.get(i);
        end = System.nanoTime();
        System.out.println("ArrayList took "+(1e-9)*(end-start)+" seconds");

        /*linked list*/
        sum = 0;
        LinkedList<Double> linked_list = new LinkedList<Double>();
        for (i=0;i<size;i++)
            linked_list.add(new Double(i));

        start = System.nanoTime();
        for (it=0;it<20;it++)
        {
            for (i=0;i<size;i++)
                sum+=linked_list.get(i);
        }
        end = System.nanoTime();
        System.out.println("LinkedList took "+(1e-9)*(num_it/it)*(end-start)+" seconds");


        start = System.nanoTime();
        for (it=0;it<num_it;it++)
            for (i=0;i<size;i++)
                array[i]=i;
        end = System.nanoTime();
        System.out.println("Array took "+(1e-9)*(end-start)+" seconds");

        /*data structures for 3D and 1D approaches*/
        double x3[][][] = allocate3D();
        double x1[] = allocate1D();

        /*case 1, 3D k->j->i*/
        initData3D(x3);
        start = System.nanoTime();
        for (it=0;it<num_it;it++)
        {
            for (k=1;k<nz-1;k++)
                for (j=1;j<ny-1;j++)
                    for (i=1;i<nx-1;i++)
                    {
                        x3[i][j][k]=(1/6.0)*(x3[i-1][j][k]+x3[i+1][j][k]+
                                             x3[i][j-1][k]+x3[i][j+1][k]+
                                             x3[i][j][k-1]+x3[i][j][k+1]);
                    }
        }
        end = System.nanoTime();
        System.out.println("Case 1, 3D k->j->i took "+(1e-9)*(end-start)+" seconds");
        output3D("case1.dat",x3);

        /*case 2, 3D i->j->k*/
        initData3D(x3);
        start = System.nanoTime();
        for (it=0;it<num_it;it++)
        {
            for (i=1;i<nx-1;i++)
                for (j=1;j<ny-1;j++)
                    for (k=1;k<nz-1;k++)
                    {
                        x3[i][j][k]=(1/6.0)*(x3[i-1][j][k]+x3[i+1][j][k]+
                                             x3[i][j-1][k]+x3[i][j+1][k]+
                                             x3[i][j][k-1]+x3[i][j][k+1]);
                    }
        }
        end = System.nanoTime();
        System.out.println("Case 2, 3D i->j->k took "+(1e-9)*(end-start)+" seconds");
        output3D("case2.dat",x3);

        /*case 3, 1D flat array*/
        initData1D(x1);
        /*precompute node offsets*/
        int node_offset[] = nodeOffsets();

        start = System.nanoTime();
        for (it=0;it<num_it;it++)
        {
            for (i=1;i<nx-1;i++)
                for (j=1;j<ny-1;j++)
                    for (k=1;k<nz-1;k++)
                    {
                        int u=IJKtoU(i,j,k);
                        sum=0;
                        for (int t=0;t<6;t++) sum+=x1[u+node_offset[t]];

                        x1[u]=(1/6.0)*sum;
                    }
        }
        end = System.nanoTime();
        System.out.println("Case 3, flat 1D took "+(1e-9)*(end-start)+" seconds");
        output1D("case3.dat",x1);
    }

    /**allocates 3D nn*nn*nn array*/
    static double[][][] allocate3D()
    {
        double x[][][]=new double[nx][][];

        for (int i=0;i<nx;i++)
        {
            x[i] = new double[ny][];
            for (int j=0;j<ny;j++)
                x[i][j] = new double[nz];
        }
        return x;
    }

    /**resets data, assumes uniform 3D nn*nn*nn mesh*/
    static void initData3D(double x[][][])
    {
        int i,j,k;

        /*set everything to zero*/
        for (i=0;i<nx;i++)
            for (j=0;j<ny;j++)
                for (k=0;k<nz;k++)
                    x[i][j][k]=0;

        /*set default value of 100 on x=0 plane*/
        for (j=0;j<ny;j++)
            for (k=0;k<nz;k++)
                x[0][j][k]=100;
    }

    /**allocates 1D nn*nn*nn array*/
    static double[] allocate1D()
    {
        /*allocate memory structure*/
        return new double[nn*nn*nn];
    }

    /**resets data, assumes uniform 3D nn*nn*nn mesh*/
    static void initData1D(double x[])
    {
        int i,j,k;

        /*set everything to zero*/
        for (i=0;i<nx;i++)
            for (j=0;j<ny;j++)
                for (k=0;k<nz;k++)
                {
                    x[IJKtoU(i,j,k)]=0;
                }

        /*set default value of 100 on x=0 plane*/
        for (j=0;j<ny;j++)
            for (k=0;k<nz;k++)
            {
                x[IJKtoU(0,j,k)]=100;
            }
    }

    /**flattens i,j,k index, u = i*(ny*nz)+j*(nz)+k*/
    static int IJKtoU(int i, int j, int k)
    {
        return i*(ny*nz) + j*nz + k;
    }

    /**returns node offsets for a standard finite difference stencil*/
    static int[] nodeOffsets()
    {
        int node_offsets[] = new int[6];
        node_offsets[0]=IJKtoU(-1,0,0);
        node_offsets[1]=IJKtoU(+1,0,0);
        node_offsets[2]=IJKtoU(0,-1,0);
        node_offsets[3]=IJKtoU(0,+1,0);
        node_offsets[4]=IJKtoU(0,0,-1);
        node_offsets[5]=IJKtoU(0,0,+1);
        return node_offsets;
    }

    /**saves 3D mesh in the Tecplot format*/
    static void output3D(String file_name, double x3[][][])
    {
        PrintWriter pw = null;
        try{
            pw = new PrintWriter(new FileWriter(file_name));
        }
        catch (Exception e)
        {
            System.err.println("Failed to open output file "+file_name);
        }

        pw.println("VARIABLES = i j k X");
        pw.printf("ZONE I=%d J=%d K=%d\n",nx,ny,nz);

        for (int i=0;i<nx;i++)
            for (int j=0;j<ny;j++)
                for (int k=0;k<nz;k++)
                    pw.printf("%d %d %d %g\n",i,j,k,x3[i][j][k]);
        pw.close();
    }

    /**saves flat 1D mesh in the Tecplot format*/
    static void output1D(String file_name, double x1[])
    {
        PrintWriter pw = null;
        try{
            pw = new PrintWriter(new FileWriter(file_name));
        }
        catch (Exception e)
        {
            System.err.println("Failed to open output file "+file_name);
        }

        pw.println("VARIABLES = i j k X");
        pw.printf("ZONE I=%d J=%d K=%d\n",nx,ny,nz);

        for (int i=0;i<nx;i++)
            for (int j=0;j<ny;j++)
                for (int k=0;k<nz;k++)
                    pw.printf("%d %d %d %g\n",i,j,k,x1[IJKtoU(i,j,k)]);
        pw.close();
    }
}

You can also download the source code. And do not hesitate to contact us if you have an old code that could benefit from some optimization. Here at PIC-C, we have many years of experience analyzing and optimizing simulation codes and will gladly apply our experience towards your problem.

References

  1. Class notes, Virginia Tech CS4414, “Issues in Scientific Computing”, taught by Adrian Sandu, ~2003
  2. Wadleigh, K.R., and Crawford, I.L., “Software Optimization for High Performance Computing: Creating Faster Applications”, Prentice Hall, 2000

12 comments to “Code Optimization: Speed up your code by rearranging data access”

  1. John
    April 24, 2012 at 11:01 am

    What is the operation count of Gauss-Seidel for 1D, 2D and 3D matrices of the Laplace equation? Using ordinary elimination to solve the problem requires order N in 1D, N^4 in 2D and N^7 in 3D (order * bandwidth^2 = N^3 * (N^2)^2 = N^7). The Fast Poisson Solver or FFT requires N^2 * log2(N) operations in 2D and I think N^3 * log2(N^2) in 3D (order * log2(bandwidth)). So for a 3D matrix with N=100, using the FFT would be N^4 / log2(N^2) faster, or over 7 million times faster than ordinary elimination. I wonder how much faster it would be compared to 3D Gauss-Seidel. Here is an MIT lecture on the Fast Poisson Solver on a square or cube shape.

    http://ocw.mit.edu/courses/mathematics/18-086-mathematical-methods-for-engineers-ii-spring-2006/video-lectures/lecture-20-fast-poisson-solver/

    and the corresponding book chapter

    http://www.myoops.org/twocw/mit/NR/rdonlyres/Mathematics/18-086Spring-2005/EC160E25-0F93-4D53-A297-4FDDD4E44AC6/0/am72.pdf

  2. John
    April 24, 2012 at 1:10 pm

    Correction, the FFT operation count is M*log2(M). So for 1D let M = N, for 2D M = N*N and for 3D M = N*N*N.

    In 2D the FFT operation count is

    N^2 * log2(N^2) = 2 * N^2 * log2(N)

    In 3D the FFT operation count is

    N^3 * log2(N^3) = 3 * N^3 * log2(N)

    For a 3D mesh with N = 64 the FFT speedup over Ordinary Elimination is N^7 / (3*N^3*log2(N)) = 932067 times faster.

    For a 3D mesh with N = 128 the FFT is 12782640 times faster than ordinary elimination algorithm.

    I wonder how much faster the FFT or Fast Poisson Solver will be than using Gauss-Seidel in 3D. It would be interesting to implement it to compare and see what the effect of cache misses has on it.

    • April 24, 2012 at 2:42 pm

      John, from my own experience, such highly optimized matrix solvers are not all that usable in practice, especially when you consider algorithms such as particle in cell. In PIC you end up doing a bunch of other things besides solving matrices, such as pushing particles. Often, the time spent pushing particles exceeds the matrix solver by an order of magnitude. So even if you get a super optimized solver, it only buys you so much. My take on code optimization is to understand the basics (such as memory access) and apply those lessons across the board. I think that in general this works better than focusing all your energy on a single component, as the folks who write these super optimized solvers do.

    • April 24, 2012 at 3:03 pm

      But I agree, Gauss Seidel is not a very good algorithm. In my codes I generally use a diagonally-preconditioned conjugate gradient. I didn’t use it in this article since the algorithm is more complicated. A PCG solver can often converge in sqrt(n) steps which is a huge improvement over the n*log(n) steps (or so) needed by GS. The beauty of GS is that it’s very simple to implement and can thus also be used to verify more complicated solvers such as the PCG.

  3. John
    April 24, 2012 at 10:15 pm

    I looked it up and iterative solvers like GS have an operation count of

    count = s * N * M

    where

    M is the number of iterations required to converge.
    N is the number of unknowns.
    s is the number of nonzero coefficients per equation.

    Using the above symbols the FFT operation count is N * log2(N). So the speedup of using FFT over GS would be s*M / log2(N). Taking s to be 1 and M = 500, N = 100^3, then FFT is 25 times faster than GS.

    When you have a rectangle shape in 2D or box shape in 3D then I think the FFT method is supposed to be the fastest solver.

    I found the operation count for iterative solvers in table 3.1 in the link below.

    http://books.google.ca/books?id=oDo3LqUa6bgC&pg=PA30&lpg=PA30&dq=Gauss+Seidel+operation+count&source=bl&ots=6S1v36x-Qf&sig=vBZoHG6u1ZXIlXmW-iXHOxKCLmI&hl=en&sa=X&ei=b3-XT4LBMrHMiQLi-q0S&ved=0CFIQ6AEwBg#v=onepage&q=Gauss%20Seidel%20operation%20count&f=false

    • April 25, 2012 at 3:56 am

      This shows you just how important selecting the right algorithm is. The algorithm should always be the first thing to consider when optimizing your code. There are generally multiple ways of getting something done, and some will be inherently faster than others. Only after you pick a fast algorithm should you start playing with rewriting memory access, unrolling loops, and so on…

      By the way, that link didn’t work for me. It says the page is not available. I think Google Books picks randomly which pages show up and I guess this wasn’t one of them.

  4. John
    April 25, 2012 at 8:03 am

    The link works fine for me each time. Anyway it is section 3.4 Operation Counts in the book if you want to look up the book in your library. In it they provide a couple of tables of operation counts in 1D, 2D and 3D for inversion of a matrix, LU decomposition and iterative methods. They also say that the value of “s” in the formula I gave is 3 for 1D , 5 for 2D and 7 for 3D. This matches your equation for 3D because you are doing 6 adds and 1 multiply for each unknown in the equation above your figure 1 in this blog.

    They go on to say this about 3D.

    “Many important applications today are fundamentally 3-D, and memory requirements alone are making iterative methods necessary. There is a big window of oportunity in the 3-D iteration count, from n^7 (the operation count for 3-D LU) to Mn^3. Finding practical iterative methods which have M ~ n^4 is a major frontier in 3-D. Recall that N = n^3 for 3D; so a practical 3D target is achieving M ~ N^(4/3) .”

  5. Chaudhury
    May 2, 2012 at 2:01 pm

    I have a question: typically in a PIC code, how much efficiency (speed up) can be achieved by using single precision instead of double precision, and how much will I compromise on accuracy? Is there some typical estimate of this, or some paper?

    • May 2, 2012 at 2:18 pm

      You know, I am not sure how much of this still holds. There are basically two reasons why single precision would give you a faster code. First, if I am not mistaken, older CPUs were optimized to deal with single precision numbers, and as such operations on doubles were inherently slower. I think this is no longer the case. However, the second speed up still exists and is related to the point of this article. If you use singles, you will end up using half the memory space, and as such, you’ll be able to fit more data in the cache, resulting in faster memory access. And of course you can also run larger cases.

      We generally use doubles now. I’ll look into the effects in a future article, it’s definitely interesting. Thanks for the suggestion!

      The problem with single precision is that you can resolve only about 7 orders of magnitude of difference. Although this may seem like a lot, it is quite easy to hit this threshold when, for instance, scattering particle charges to the grid, especially if you have variable particle weights. After scattering just a few massive particles, scattering a large number of small particles may not increase the charge density any further. The solution would then be to scatter the light particles first; however, that is often not practical.

  6. Chaudhury
    May 2, 2012 at 2:36 pm

    Thanks Ludos for the reply, I agree with your comments. I am looking forward to your answer for a typical test case, just to have a quantitative idea about the gain in speed.

    • Chaudhury
      May 15, 2012 at 3:23 am

      I performed this test with my PIC code for both cases, single precision and double precision. For the particular problem I tested, the single precision code took 80% of the time taken by the double precision code. So typically we can say single precision codes should be around 20-25% faster. However, there is definitely an accuracy issue, so it is always better to use the double precision version.

      • May 15, 2012 at 5:41 pm

        Thanks for the info!
