Roy Longbottom's Raspberry Pi & Raspberry Pi 2 Benchmarks


Contents


General Raspberry Pi System Standards/Configuration Details
Whetstone Benchmark Whetstone Comparisons Dhrystone 2 Benchmark
Dhrystone 2 Comparisons Linpack Benchmark Linpack Comparisons
Livermore Loops Benchmark Livermore Loops Comparisons Livermore Loops Stability Test
Memory Speed Benchmark MemSpeed Comparison Bus Speed Benchmark
BusSpeed Comparison Java Benchmarks Java Whetstone Benchmarks
Java Whetstone Comparison JavaDraw Benchmark JavaDraw Comparison
OpenGL ES Benchmark OpenGL ES Comparison OpenGL GLUT Benchmark
OpenGL GLUT Comparison DriveSpeed Benchmark DriveSpeed Comparison
DriveSpeed F2FS Format Copying F2FS Files LAN/WiFi Benchmark 1
LAN/WiFi Benchmark 2 LAN/WiFi Comparison Single Core NEON Benchmarks
Linpack NEON Benchmarks NEON Float & Integer Benchmark NEON MemSpeed Benchmark
NEON Maximum 1 Core MFLOPS MultiThreading Benchmarks NEON MultiThreading Benchmarks
FFT Benchmarks Temperature & MHz Recorder Reliability Tests
Performance Monitor Assembly Code


General

Roy Longbottom’s PC Benchmark Collection comprises numerous FREE benchmarks and reliability testing programs for processors, caches, memory, buses, disks, flash drives, graphics, local area networks and the Internet. The original ones ran via DOS, with later versions under all varieties of Windows. Most have also been converted to run under Linux on PCs, and many to run via Android on tablets and phones. Some of the Linux variety C/C++ source code was changed slightly to compile for execution on the Raspberry Pi.

After reading that compilation time on the Raspberry Pi was painfully slow, the programs were compiled on a Linux Ubuntu 12.04 based PC via the Raspbian Toolchain, using instructions downloaded from www.xappsoftware.com. This allows programs to be compiled from a Terminal window. Using this, the C/C++ code can first be compiled to run on the Linux driven PC, then transferred to the Raspberry Pi via LAN or a USB flash drive. In order to execute after transferring, a change to Properties, Permissions is needed to make the file executable. One complication is that setting the path to the cross compiler did not work as suggested by xappsoftware. Below are examples of the commands used for the two executable files - note the path for gcc:

   cc  whets.c cpuidc.c -lm -O3 -o whetstoneIL

  ~/toolchain/raspbian-toolchain-gcc-4.7.2-linux32/bin/arm-linux-gnueabihf-gcc whets.c 
      cpuidc.c -lm -O3 -march=armv6 -mfloat-abi=hard -mfpu=vfp -o whetstonePiA6

  Command to execute -  ./whetstonePiA6
  
The last three parameters (-march to -mfpu) made no difference to performance, but others are likely to be needed to take advantage of later ARM floating point functions. Note that the first four benchmark programs were compiled later on the Raspberry Pi itself. Both the above cc and gcc (with no Toolchain path) commands were used for compilation. These and the PC based files all produced the same numeric results and mainly the same performance. Compilation time was acceptable, at between 8 and 36 seconds.

The benchmarks and source codes can be downloaded in Raspberry_Pi_Benchmarks.zip. This includes the executables compiled, as above, to run on Intel CPUs via Linux and the versions compiled on the Raspberry Pi. To download the benchmarks, click on the Raspberry_Pi_Benchmarks.zip link, select Save to download to Home (assume /home/pi). Open File Manager and right click on zip file and select Extract here.

To enable execution of the programs, a security setting is required. Double click on the Raspberry_Pi_Benchmarks folder to open it, right click on each executable (dhrystonePiA6, linpackPiA6, linpackPiSP, liverloopsPiA6, memspeedPiA6, whetstonePiA6), select Properties, Permissions, and tick Make the file executable (alternatively, use chmod +x from a Terminal). The newer Raspberry Pi 2 program titles mainly end in PiA7.

To run, open LX Terminal, type cd Raspberry_Pi_Benchmarks to enter the directory, type ls to ensure the path is correct and to list files, then execute for example using ./dhrystonePiA6. Information will be displayed as the benchmarks are running and results will be saved in log files, example Dhry.txt.




Raspberry Pi System

For those who do not know, the Raspberry Pi has a 3.5 x 2.5 inch motherboard, in this case, containing a 700 MHz ARM 1176JZF v6 single core CPU and 512 MB RAM. External connectors include two full size USB sockets with others for a full size HDMI plug, a micro USB socket for power, an RJ45 Ethernet port and a slot for an SD card, used as the main drive.

The operating system is Raspbian, based on Linux Debian, in this case Wheezy-Raspbian. This can be obtained pre-loaded on an SD card or downloaded from raspberrypi.org and copied to an SD card to produce a bootable drive. I used Image Writer for Microsoft Windows for this purpose.

In my case, booting time, from connecting power to desktop display, is 30 seconds. Using a simple command (see below) produces a menu where CPU speed can be selected up to 1 GHz, also increasing memory bus speed.

Raspberry Pi 2 Model B has a 900 MHz quad core Broadcom BCM2836 ARM V7 CPU with 1 GB RAM and can be overclocked to 1 GHz, using the configuration menu. L1 data cache size is 32 KB and L2 cache 512 KB, shared by all cores. Existing benchmarks were run on the new computer along with additional programs, produced by a newer compiler, to see if additional hardware features were used. The additional benchmarks were produced using gcc 4.8, where a typical compile command is:

 gcc whets.c cpuidc.c -lm -O3 -mcpu=cortex-a7 -mfpu=neon-vfpv4 -mfloat-abi=hard -o newA7




Standards/Configuration Details

All the benchmarks are run from Terminal commands and provide continuous displays of current activity. This was included in original versions of the benchmarks when CPUs were really slow. They all produce a summary of results in a .txt based log file and this includes system information, where the following example is for my particular system. Note that this includes the meaningless BogoMIPS measurement that does not change when the processor is overclocked. Raspberry Pi 2 has additional features such as neon, vfpv3 and vfpv4.

The programs provide keyboard input at the end to include comments in the log, such as "overclocked at 1000 MHz". The source code has expected numeric answers, selected for particular hardware. These are checked for correctness and errors reported in the log. Running on a variation of the hardware could produce false error reports for floating point calculations.

Also shown below are the command to select the menu with the overclocking option and commands to obtain CPU MHz; the frequencies these report did not change when the CPU was overclocked.


 SYSTEM INFORMATION

 From File /proc/cpuinfo
 Processor	: ARMv6-compatible processor rev 7 (v6l)
 BogoMIPS	: 464.48 was   #371 PREEMPT 
 BogoMIPS	: 697.95 later #557 PREEMPT
 Features	: swp half thumb fastmult vfp edsp java tls 
 CPU implementer	: 0x41
 CPU architecture: 7
 CPU variant	: 0x0
 CPU part	: 0xb76
 CPU revision	: 7
 Hardware	: BCM2708
 Revision	: 000d
 Serial		: 00000000db690cb4
 
 From File /proc/version
 Linux version 3.6.11+ (dc4@dc4-arm-01) (gcc version 4.7.2 20120731 (prerelease) 
       (crosstool-NG linaro-1.13.1+bzr2458 - Linaro GCC 2012.08) ) #371 PREEMPT 
       Thu Feb 7 16:31:35 GMT 2013

 ####################################################

 Raspberry Pi 2

processor	: 0, 1, 2 and 3
model name	: ARMv7 Processor rev 5 (v7l)
BogoMIPS	: 38.40
Features	: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva
                  idivt vfpd32 lpae evtstrm 
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xc07
CPU revision	: 5

 Linux version 3.18.5-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.8.3 20140303 (prerelease)
      (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) ) #225 SMP PREEMPT 
      Fri Jan 30 18:53:55 GMT 2015

 ####################################################

 Commands to obtain CPU MHz

 vcgencmd measure_clock arm
 frequency(45)=700074000

 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
 700000

 Command for overclocking selection

 sudo raspi-config

   




Whetstone Benchmark - whetstonePiA6 and whetstonePiA7

The Whetstone Benchmark, introduced in 1972, was the first general purpose benchmark that set industry standards of performance, particularly for minicomputers. The benchmark produced speed ratings in terms of Thousands of Whetstone Instructions Per Second (KWIPS). In 1978, self timing versions (by yours truly) produced speed ratings for each of the eight test procedures, in MOPS (Millions of Operations Per Second) or MFLOPS (Millions of Floating Point Operations Per Second), with an overall rating in MWIPS, mainly dependent on floating point speed.

Unlike some other floating point benchmarks, the new PiA7 compilation produces identical numeric results to those below.

Besides the logged results, other information, shown below, is displayed on the Terminal, particularly for the calibration that arranges a total running time of about 10 seconds. The time for each test identifies what determines the overall MWIPS rating: it now depends mainly on the tests with mathematical functions (N5 and N8 below), where originally N6 floating point dominated.



pi@raspberrypi ~/benchmarks $ ./whetstonePiA6

##########################################
Single Precision C Whetstone Benchmark Opt 3 32 Bit, Sun May 12 11:05:53 2013

Calibrate
       0.04 Seconds          1   Passes (x 100)
       0.19 Seconds          5   Passes (x 100)
       0.93 Seconds         25   Passes (x 100)
       4.68 Seconds        125   Passes (x 100)

Use 267  passes (x 100)

          Single Precision C/C++ Whetstone Benchmark

Loop content                  Result              MFLOPS      MOPS   Seconds

N1 floating point      -1.12475013732910156        97.811               0.053
N2 floating point      -1.12274742126464844       100.800               0.360
N3 if then else         1.00000000000000000                 698.625     0.040
N4 fixed point         12.00000000000000000                 425.250     0.200
N5 sin,cos etc.         0.49911010265350342                   5.850     3.840
N6 floating point       0.99999982118606567        85.669               1.700
N7 assignments          3.00000000000000000                 498.960     0.100
N8 exp,sqrt etc.        0.75110864639282227                   2.722     3.690

MWIPS                                             270.460               9.983

A new results file, whets.txt,  will have been created in the same
directory as the .EXE files, if one did not already exist.

Type additional information to include in whets.txt - Press Enter

   




Whetstone Benchmark Comparisons

Results below are for the Raspberry Pi running at 700 MHz and overclocked at 1000 MHz. For comparison purposes, also shown are speeds obtained on various Android based ARM CPUs and Intel processors running under Linux, compiled as above. The latter are similar to those from my earlier Linux benchmarks. Results on many more systems are in Whetstone Results.htm with speeds of ancient computers in Whetstone Benchmark History and Results.

Raspberry Pi 2, with default settings, is just over twice as fast as the original, on average, or 57% faster at 1000 MHz. Performance via gcc 4.8 can be slightly slower than the earlier benchmarks. The programming code used is not really suitable to produce performance gains through advanced instructions.


 System        MHz  MWIPS  ------MFLOPS-------   ------------MOPS---------------
                             1      2      3     COS   EXP  FIXPT      IF  EQUAL

 Raspberry Pi  700  270.5   97.8  100.8   85.7   5.9   2.7  425.3   698.6  499.0
 Raspberry Pi 1000  390.6  136.8  146.3  122.9   8.5   3.9  617.4  1014.3  804.9 

 RPi 2         900  525.0  252.0  261.3  223.0  10.2   5.1 1102.5  1358.4  882.0
 RPi 2        1000  584.6  280.3  290.7  248.0  11.3   5.7 1314.0  1208.9  981.1
 gcc 4.8
 RPi 2         900  507.0  250.4  227.1  184.6  10.1   5.1 1113.7  1334.9  668.4
 RPi 2        1000  568.4  280.4  254.4  206.7  11.3   5.7 1248.8  1497.9  749.2

 ARM 926EJ     800   31.2   10.2   10.2   11.4   0.6   0.3   38.8   278.4  219.4
 ARM v7-A9     800  687.4  165.4  149.9  153.4  15.9   9.3  723.1  1082.1  725.3
 ARM v7-A9    1300 1115.0  271.3  250.7  256.4  25.8  14.6 1190.0  1797.0 1198.7
 ARM v7-A15   1700 1333.6  315.5  291.2  298.6  39.8  18.1 1394.7  2089.9 1395.5 

 Intel Atom   1666  822.3  332.4  325.7  308.6  17.2   8.1 1013.8  2368.9 1228.0
 Core 2       2400 2316.1  810.0  790.4  576.2  56.8  23.8 3986.9  7532.4 2831.4
 Core i7      3900 3959.0 1331.0 1330.9  938.4  96.5  42.1 6515.7 10966.7 5850.8
   




Dhrystone 2 Benchmark - dhrystonePiA6 and dhrystonePiA7

The Dhrystone "C" benchmark provides a measure of integer performance (no floating point instructions). It became the key standard benchmark from 1984, with the growth of Unix systems. The first version was produced by Reinhold P. Weicker in ADA and translated to "C" by Rick Richardson. Two versions are available - Dhrystone versions 1.1 and 2.1. The second version, used here, was produced to avoid the over-optimisation problems encountered with version 1, but some over-optimisation is still possible. Speed was originally measured in Dhrystones per second. This was later changed to VAX MIPS by dividing Dhrystones per second by 1757, the DEC VAX 11/780 result, that system being regarded as the first 1 MIPS minicomputer.

This again runs for 10 seconds after calibration. In this case, the logged results are nanoseconds for one Dhrystone run, Dhrystones per Second and the VAX MIPS rating, plus details of detected errors or “Numeric results were correct”. Below is the execution command and details of the displayed information, excluding standard system information.



pi@raspberrypi ~/benchmarks $ ./dhrystonePiA6

##########################################

Dhrystone Benchmark, Version 2.1 (Language: C or C++)

Optimisation    Opt 3 32 Bit
Register option not selected

       10000 runs   0.00 seconds 
      100000 runs   0.07 seconds 
      200000 runs   0.15 seconds 
      400000 runs   0.28 seconds 
      800000 runs   0.56 seconds 
     1600000 runs   1.13 seconds 
     3200000 runs   2.26 seconds 

Final values (* implementation-dependent):

Int_Glob:      O.K.  5  Bool_Glob:     O.K.  1
Ch_1_Glob:     O.K.  A  Ch_2_Glob:     O.K.  B
Arr_1_Glob[8]: O.K.  7  Arr_2_Glob8/7: O.K.     3200010
Ptr_Glob->              Ptr_Comp:       *    5722488
  Discr:       O.K.  0  Enum_Comp:     O.K.  2
  Int_Comp:    O.K.  17 Str_Comp:      O.K.  DHRYSTONE PROGRAM, SOME STRING
Next_Ptr_Glob->         Ptr_Comp:       *    5722488 same as above
  Discr:       O.K.  0  Enum_Comp:     O.K.  1
  Int_Comp:    O.K.  18 Str_Comp:      O.K.  DHRYSTONE PROGRAM, SOME STRING
Int_1_Loc:     O.K.  5  Int_2_Loc:     O.K.  13
Int_3_Loc:     O.K.  7  Enum_Loc:      O.K.  1  
Str_1_Loc:                             O.K.  DHRYSTONE PROGRAM, 1'ST STRING
Str_2_Loc:                             O.K.  DHRYSTONE PROGRAM, 2'ND STRING

 Nanoseconds one Dhrystone run:       671.88
 Dhrystones per Second:              1488372
 VAX MIPS rating =                    847.11

Type additional information to include in Dhry.txt - Press Enter

   




Dhrystone 2 Benchmark Comparisons

Below is a similar combination of results as for the Whetstone Benchmark. For results on other systems see Dhrystone Results.htm. Unlike with Whetstones and its floating point calculations, the Raspberry Pi CPU speed executing these integer functions is close to that of ARM Cortex-A9 processors on a per MHz basis. The Raspberry Pi 2 is faster than the first version, performance ratios being shown below. The new gcc 4.8 compilation provides slightly higher speed ratings.


   System          MHz  VAX MIPS 
  

   Raspberry Pi    700     847
   Raspberry Pi   1000    1226

   Raspberry Pi 2  900    1538  1.82 x RPi  700
   Raspberry Pi 2 1000    1694  1.38 x RPi 1000
   gcc 4.8
   Raspberry Pi 2  900    1667  1.08 x RPi 2  900
   Raspberry Pi 2 1000    1852  1.09 x RPi 2 1000

   Android

   ARM 926EJ       800     356
   ARM v7-A9       800     962
   ARM v7-A9      1300    1610
   ARM v7-A15     1700    3189

   Linux using CC

   Intel Atom     1666    2629
   Core 2         2400    6857

   Linux using older GCC

   Intel Atom     1666    2055
   Core 2         2400    5582
   Core i7        3900   16356
   




Linpack Benchmark - linpackPiA6 linpackPiSP linpackPiA7 linpackPiA7SP

The Linpack Benchmark was produced from the "LINPACK" package of linear algebra routines. It became the primary benchmark for scientific applications, particularly under Unix, from the mid 1980's, with a slant towards supercomputer performance. The original double precision C version, used here, operates on 100x100 matrices. Performance is governed by an inner loop in function daxpy() with a linked triad dy[i] = dy[i] + da * dx[i], and is measured in Millions of Floating Point Operations Per Second (MFLOPS).

Displayed output is the same as the original version for PCs (my conversion at Netlib - 1996), where the bloated detail was needed due to using a low resolution timer. The line starting with norm resid 1.7 shows the numeric results of calculations. These can vary using different hardware and compilers - see examples in Linpack numeric results Android. For comparison purposes, these are set in the C source code and checked at run time, a “Numeric results were as expected” message being logged if correct, or details provided if incorrect. Note that the compiled code could give consistently different results on other Linux based ARM processors. The log file shows only one MFLOPS speed.

Unlike with normal Intel floating point, on ARM processors double precision calculations are often slower than those using single precision. So, besides linpackPiA6, a single precision compilation, linpackPiSP, is also provided. As with the double precision results, these are identical to those on Android based ARM systems.

The gcc 4.8 equivalents are linpackPiA7 and linpackPiA7SP where, as shown below, these produce different numeric answers. These are probably acceptable, being due to different rounding with the assembly code generated for the performance dependent code.



pi@raspberrypi ~/benchmarks $ ./linpackPiA6 

##########################################
Unrolled Double Precision Linpack Benchmark - Linux Version in 'C/C++'

Optimisation Opt 3 32 Bit

norm resid      resid           machep         x[0]-1          x[n-1]-1
   1.7    7.41628980e-14   2.22044605e-16  -1.49880108e-14  -1.89848137e-14

Times are reported for matrices of order          100
1 pass times for array with leading dimension of  201

      dgefa      dgesl      total     Mflops       unit      ratio
    0.00000    0.00000    0.00000       0.00     0.0000     0.0000

Calculating matgen overhead
        10 times   0.01 seconds
       100 times   0.15 seconds
       200 times   0.28 seconds
       400 times   0.58 seconds
       800 times   1.13 seconds
Overhead for 1 matgen      0.00141 seconds

Calculating matgen/dgefa passes for 1 seconds
        10 times   0.17 seconds
        20 times   0.35 seconds
        40 times   0.69 seconds
        80 times   1.38 seconds
Passes used         57 

Times for array with leading dimension of 201

      dgefa      dgesl      total     Mflops       unit      ratio
    0.01578    0.00053    0.01631      42.11     0.0475     0.2912
    0.01596    0.00053    0.01648      41.66     0.0480     0.2943
    0.01578    0.00053    0.01631      42.11     0.0475     0.2912
    0.01596    0.00053    0.01648      41.66     0.0480     0.2943
    0.01578    0.00070    0.01648      41.66     0.0480     0.2943
Average                                41.84

Calculating matgen2 overhead
Overhead for 1 matgen      0.00144 seconds

Times for array with leading dimension of 200

      dgefa      dgesl      total     Mflops       unit      ratio
    0.01523    0.00053    0.01576      43.58     0.0459     0.2813
    0.01540    0.00053    0.01593      43.10     0.0464     0.2845
    0.01540    0.00053    0.01593      43.10     0.0464     0.2845
    0.01523    0.00070    0.01593      43.10     0.0464     0.2845
    0.01523    0.00070    0.01593      43.10     0.0464     0.2845
Average                                43.20

Unrolled Double  Precision       41.84 Mflops

Type additional information to include in linpack.txt - Press Enter

Raspberry Pi Results of Calculations

         norm resid      resid           x[0]-1           x[n-1]-1
 DP Pi      1.7     7.41628980e-14  -1.49880108e-14  -1.89848137e-14
 DP Pi 2    1.9     8.46778499E-14  -1.11799459E-13  -9.60342916E-14
 SP Pi      1.6     3.80277634e-05  -1.38282776e-05  -7.51018524e-06
 SP Pi 2    2.0     4.69621336E-05  -1.31130219E-05  -1.30534172E-05




Linpack Benchmark Comparisons

The first Raspberry Pi results do not look too good, but they would on a cost/performance basis. The MFLOPS ratings should be compared with those for PCs in Linpack Results, and with older mainframes, supercomputers, Unix boxes and minicomputers in Netlib Linpack Results. The Linpack benchmark depends on data in L2 cache and this might lead to variations in running time. Other versions might specify larger array sizes (like 1000 x 1000) that can depend on slower memory.

The Raspberry Pi 2 is faster than the first version, performance ratios being shown below. In this case, the new code from gcc 4.8 is faster than the original, but only for the double precision benchmark, due to more efficient compiled instructions. The benchmark has also been compiled to use ARM NEON Single Instruction Multiple Data (SIMD) functions, speed being included in the results table. Further details are in a later section.

         
                                      MFLOPS              GAIN
   System          MHz         DP      SP   NEON SP    DP      SP   Against
  

   Raspberry Pi    700         42      58     N/A
   Raspberry Pi   1000         68      88     N/A

   Raspberry Pi 2  900        120     156     N/A    2.86    2.69   RPi  700
   Raspberry Pi 2 1000        134     175     N/A    1.97    1.99   RPi 1000
   gcc 4.8
   Raspberry Pi 2  900        154     156     300    1.28    1.00   RPi 2  900
   Raspberry Pi 2 1000        169     176     334    1.26    1.01   RPi 2 1000

  Android

   ARM 926EJ       800          6      10     N/A
   ARM v7-A9       800        101     129     256
   ARM v7-A9      1300        151     201     377
   ARM v7-A15     1700        459     803    1335
   gcc 4.8
   ARM v7-A9      1300        159     200
   ARM v7-A15     1700        795     977

   Linux using CC

   Intel Atom     1666        211
   Core 2         2400       1631

   Linux using older GCC

   Intel Atom     1666        196
   Core 2         2400       1288
   Core i7        3900       2534

   




Livermore Loops Benchmark - liverloopsPiA6 liverloopsPiA7

This original main benchmark for supercomputers was first introduced in 1970, initially comprising 14 kernels from numerical applications, written in Fortran. This was increased to 24 kernels in the 1980s. Performance measurements are in terms of Millions of Floating Point Operations Per Second or MFLOPS. The kernels are executed three times with different double precision data array sizes. Following are overall MFLOPS results for various systems, the geometric mean being the official average performance. [Reference - F.H. McMahon, The Livermore Fortran Kernels: A Computer Test Of The Numerical Performance Range, Lawrence Livermore National Laboratory, Livermore, California, UCRL-53745, December 1986]

                    ---------------- MFLOPS ---------------               
CPU            MHz  Maximum Average Geomean Harmean Minimum   Measured in

CDC 6600        10     1.1     0.5     0.5     0.4     0.2      1970  *  
CDC 7600        36.4   7.3     4.2     3.9     2.5     1.4      1974  *  
Cray 1A         80    83.5    25.8    14.4     7.9     2.7      1980  *  
Cray 1S         80    82.1    22.2    11.9     6.5     1.0      1985     
CDC Cyber 205   50   146.9    36.4    14.6     5.0     0.6      1982  *  
Cray 2         244   146.4    36.7    14.2     5.8     1.7      1985     
Cray XMP1      105   187.8    61.3    31.5    15.6     3.6      1986     

                        * Fewer than 24 Kernels                          

Below are the run command, then the displayed calibration phase, final results and details for the 24 loops using the largest data sizes. Calibration arranges for each loop to run for around one second. The Checksums OK column is an indication of accuracy, compared with a specification probably based on results from CDC 6600 and 7600. These hardware/compiler dependent numeric answers are checked as in the Linpack benchmark. Results included in the log file are the Minimum, Maximum, Averages and the 24 weighted average MFLOPS speeds.

As with the Linpack benchmark, liverloopsPiA7, the gcc 4.8 compilation, produced different numeric answers to the earlier version, this time for 22 out of the 24 kernels. All were only slightly different and are shown below, for part 3 of 3. The benchmark produced a run time error from the initial gcc 4.8 compilation. This was due to the way in which shared array space is allocated and was also apparent with earlier Android compilations. So, the same code changes were made and the revised source code is included in Raspberry_Pi_Benchmarks.zip.



pi@raspberrypi ~/benchmarks $ ./liverloopsPiA6

##########################################

L.L.N.L. 'C' KERNELS: MFLOPS   P.C.  VERSION 4.0

Optimisation  Opt 3 32 Bit

Calculating outer loop overhead
      1000 times   0.00 seconds
     10000 times   0.00 seconds
    100000 times   0.00 seconds
   1000000 times   0.06 seconds
   2000000 times   0.11 seconds
   4000000 times   0.23 seconds
Overhead for each loop   5.7500e-08 seconds


Calibrating part 3 of 3

Loop count         32  0.00 seconds
Loop count        128  0.01 seconds
Loop count        512  0.04 seconds

Loops  200 x  8 x Passes

Kernel       Floating Pt ops
No  Passes E No    Total      Secs.  MFLOPS Span     Checksums          OK
------------ -- ------------- ----- ------- ---- ---------------------- --
 1  28 x  11  5  6.652800e+07  0.97   68.29   27  3.855104502494961e+01 16
 2  46 x  18  4  5.829120e+07  0.93   62.65   15  3.953296986903059e+01 16
 3  37 x  36  2  1.150848e+08  0.85  135.70   27  2.699309089320672e-01 16
 4  38 x  36  2  6.566400e+07  0.88   75.04   27  5.999250595473891e-01 16
 5  40 x  12  2  3.993600e+07  1.08   36.99   27  3.182615248447483e+00 16
 6  21 x  34  2  5.483520e+07  1.26   43.52    8  1.120309393467088e+00 15
 7  20 x  14 16  1.505280e+08  1.03  146.64   21  2.845720217644024e+01 16
 8   9 x  10 36  1.347840e+08  1.08  124.52   14  2.960543667875005e+03 15
 9  26 x  11 17  1.166880e+08  1.27   92.17   15  2.623968460874250e+03 16
10  25 x  10  9  5.400000e+07  1.16   46.59   15  1.651291227698265e+03 16
11  46 x  18  1  3.444480e+07  1.10   31.30   27  6.551161335845770e+02 16
12  48 x  14  1  2.795520e+07  1.13   24.66   26  1.943435981130448e-06 16
13  31 x   9  7  2.499840e+07  1.19   21.07    8  3.847124199949431e+10 15
14   8 x  11 11  4.181760e+07  1.08   38.63   27  2.923540598672009e+06 15
15   1 x  17 33  6.283200e+07  0.98   64.21   15  1.108997288134785e+03 16
16  14 x  34 10  8.377600e+07  1.41   59.41   15  5.152160000000000e+05 16
17  26 x  17  9  9.547200e+07  1.13   84.27   15  2.947368618589361e+01 16
18   2 x  11 44  1.006720e+08  1.16   86.92   14  9.700646212337041e+02 16
19  28 x  23  6  9.273600e+07  1.30   71.56   15  1.268230698051003e+01 15
20   7 x   9 26  6.814080e+07  1.19   57.04   26  5.987713249475302e+02 16
21   1 x   2  2  8.000000e+07  1.51   52.99   20  5.009945671204667e+07 16
22   8 x   8 17  2.611200e+07  1.16   22.42   15  6.109968728263972e+00 16
23   7 x  11 11  8.808800e+07  0.98   89.56   14  4.850340602749970e+02 16
24  23 x  35  1  3.348800e+07  1.17   28.56   27  1.300000000000000e+01 16

                     Maximum   Rate  146.64 
                     Average   Rate   65.20 
                     Geometric Mean   56.66 
                     Harmonic  Mean   48.85 
                     Minimum   Rate   21.07 

                     Do Span     19

                Overall

                Part 1 weight 1
                Part 2 weight 2
                Part 3 weight 1

                     Maximum   Rate  148.29 
                     Average   Rate   64.41 
                     Geometric Mean   54.74 
                     Harmonic  Mean   46.40 
                     Minimum   Rate   16.62 

                     Do Span    167

Type additional information to include in linpack.txt - Press Enter


 gcc 4.8 Different Results

 1 was  3.855104502494985e+01 expected  3.855104502494961e+01
 2 was  3.953296986903406e+01 expected  3.953296986903059e+01
 3 was  2.699309089321338e-01 expected  2.699309089320672e-01
 4 was  5.999250595474085e-01 expected  5.999250595473891e-01
 5 was  3.182615248448323e+00 expected  3.182615248447483e+00
 6 was  1.120309393467610e+00 expected  1.120309393467088e+00
 7 was  2.845720217644064e+01 expected  2.845720217644024e+01
 8 was  2.960543667877653e+03 expected  2.960543667875005e+03
 9 was  2.623968460874436e+03 expected  2.623968460874250e+03
10 was  1.651291227698388e+03 expected  1.651291227698265e+03
11 was  6.551161335846584e+02 expected  6.551161335845770e+02
12 was  1.943435982643127e-06 expected  1.943435981130448e-06
13 was  3.847124173932926e+10 expected  3.847124199949431e+10
14 was  2.923540598700724e+06 expected  2.923540598672009e+06
15 was  1.108997288135077e+03 expected  1.108997288134785e+03
17 was  2.947368618590736e+01 expected  2.947368618589361e+01
18 was  9.700646212341634e+02 expected  9.700646212337041e+02
19 was  1.268230698051755e+01 expected  1.268230698051003e+01
20 was  5.987713249471707e+02 expected  5.987713249475302e+02
21 was  5.009945671206671e+07 expected  5.009945671204667e+07
22 was  6.109968728264851e+00 expected  6.109968728263972e+00
23 was  4.850340602751729e+02 expected  4.850340602749970e+02


   




Livermore Loops Benchmark Comparisons

For Cray 1 comparison purposes, it is more appropriate to use Cray 1S results, as these are from running all 24 kernels. Geometric mean for this system is 11.9 MFLOPS. In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1.

Average performance gains of the Raspberry Pi 2 are not as high as those for the Linpack benchmark, but the best test loop, at 900 MHz, is 4.25 times faster than the original Pi at 700 MHz. Highest average of 138 MFLOPS is 11.6 times faster than a Cray 1.

See also Livermore Loops Results on PCs.

   
                                                                  Compare
   System          MHz    Maximum Average Geomean Harmean Minimum Geomean Against

   Raspberry Pi     700     148.3    64.4    54.7    46.4    16.6
   Raspberry Pi    1000     216.8    94.8    80.8    68.7    29.3

   Raspberry Pi 2   900     248.0   126.1   114.9   103.9    41.5    2.10  RPi  700
   Raspberry Pi 2  1000     273.5   139.7   127.3   115.2    46.5    1.58  RPi 1000 
   gcc 4.8
   Raspberry Pi 2   900     223.8   136.9   125.6   113.0    42.3    1.09  RPi 2  900
   Raspberry Pi 2  1000     244.9   150.7   138.2   124.4    46.7    1.09  RPi 2 1000    


   Android

   ARM 926EJ        800       9.9     5.6     5.4     5.2     2.4
   ARM v7-A9        800     253.2   129.3   115.3   101.6    46.7
   ARM v7-A9       1200     391.9   202.1   181.3   160.9    68.1
   ARM v7-A15      1700    1252.8   476.0   375.8   288.8    90.8

   Atom Z3745      1866    1031.2   480.0   429.8   378.6   154.7   


   Linux using CC

   Intel Atom      1666     480.3   217.6   189.9   162.2    59.7
   Core 2          2400    2264.7  1039.3   822.9   606.4   161.6

   Linux using older GCC

   Intel Atom      1666     465.2   212.2   185.1   157.4    49.7
   Core 2          2400    2384.9  1038.1   805.8   582.1   161.0
   Core i7 4820K   3900    5551.3  2196.8  1712.4  1286.6   415.3


   

MFLOPS for 24 loops (loops 1 to 12 on the first line, 13 to 24 on the second)

   Raspberry Pi 700 MHz
      66.1   79.8  132.8  141.1   23.8   29.3  110.8  129.7   90.2   38.7   32.0   25.2
      22.1   16.6   61.0   58.6   81.5   59.8   73.5   42.2   29.9   22.5   66.4   29.5

   Raspberry Pi 1000 MHz
      97.0  116.2  197.2  206.0   37.4   47.2  169.0  185.6  132.6   57.4   46.2   35.9
      32.7   32.0   89.7   85.6  118.4   88.8  107.1   75.6   47.6   32.4  106.0   42.6

   Raspberry Pi 2 900 MHz
     114.1  129.1  221.7  218.0   84.7   96.8  196.3  248.0  155.2  137.4   74.2   63.6
      62.4   70.6  125.6  125.1  196.3  153.3  132.6  115.2   78.4   41.6  166.5   89.0

   Raspberry Pi 2 1000 MHz
     126.7  143.7  246.8  242.7   94.0  108.2  218.5  273.4  172.7  135.8   82.6   70.8
      69.0   78.3  140.1  139.3  218.5  170.7  147.7  128.6   80.0   46.7  184.5   99.1

   Raspberry Pi 2 900 MHz gcc 4.8
     132.0  163.4  223.8  220.6   85.4  126.3  217.5  212.5  189.9  123.4   99.3   56.0
      67.9   83.9  125.0  133.2  202.0  180.8  160.3  125.1   86.3   42.5  185.5  127.5

   Raspberry Pi 2 1000 MHz gcc 4.8
     139.0  166.2  244.9  243.7   88.1  140.1  232.0  234.5  210.7  136.1  109.1   61.6
      74.8   92.8  137.9  147.0  223.1  199.2  177.0  133.8   95.2   47.0  204.6  140.9

   Android

   ARM 926EJ 800 MHz
       5.6    6.4    6.2    6.1    4.6    4.9    5.9    6.1    6.0    9.0    5.8    3.9
       4.0    3.6    3.8    5.6    7.6    4.5    5.7    4.3    5.2    2.5    5.7    7.4

   ARM v7-A9 800 MHz
     172.6  127.5  253.2  248.6   71.6  141.2  197.6  190.4  202.3  109.2   55.2   51.2
      54.1   51.5  100.0  144.1  192.1  139.4  130.1  105.4  111.2   63.1  136.3   56.8

   ARM v7-A9 1200 MHz
     241.7  233.4  383.5  388.7   98.4  147.1  293.1  258.5  314.6  181.1   99.1   95.3
      80.6   68.1  171.6  226.9  346.2  176.9  202.6  184.9  119.5  102.1  200.9   88.5

   Linux using CC

   Intel Atom 1666 MHz
       308    297    480    468    206    175    312    308    406    125    169    140
        64    101    122    216    236    195    220    134    188     61    304     94

   Core 2 2400 MHz
      1952   1302   1583   1527    341   1186   2184   2263   2155   1184    800    795
       162    396    371    874   1341   1029    509    384   1597    174   1190    558

   Linux using older GCC

   Intel Atom 1666 MHz
       260    250    336    374    167    178    312    306    406    128    168    105
        64     99    121    212    228    194    224    134    197     56    304     99

   Core 2 2400 MHz
      1953   1223   1584   1534    343   1238   2192   2385   2147   1187    795    479
       161    396    276    956   1368    959    509    385   1385    165   1182    560


To Start


Livermore Loops Stability Test

A long time ago, the Livermore Loops Benchmark produced wrong numeric results on an overclocked Pentium Pro CPU. A revised benchmark included a run time option to specify the nominal running time of each loop; the 5 seconds per test parameter used here is shown in the example below. With this option, the start time of each section is logged and the results of every pass are checked for correctness. Run time displays and reported performance are the same as before.
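The checking principle can be sketched as follows. The kernel and tolerance here are illustrative assumptions, not the benchmark's own code; the real benchmark compares each loop's numeric results against stored expected values, producing a "was/expected" listing like the one shown earlier when they disagree.

```c
#include <math.h>

/* Illustrative kernel standing in for one Livermore loop - an inner
   product over fixed data, so every pass should give the same result. */
double loop_kernel(void)
{
    double sum = 0.0;
    for (int i = 1; i <= 100; i++)
        sum += (0.001 * i) * (0.002 * i);
    return sum;
}

/* Check a result against the stored expected value to a relative
   tolerance, since exact floating point equality is too strict. */
int result_ok(double was, double expected, double reltol)
{
    return fabs(was - expected) <= reltol * fabs(expected);
}
```

Each timed pass would call result_ok on the loop's checksum and log a "was/expected" line on failure, otherwise reporting that numeric results were as expected.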

The stability test was run on the Pi at 700 MHz and overclocked to 1000 MHz, at 5 seconds per test (see command format), for a total time of 6 minutes. CPU temperature was measured (see measure_temp command) at 30 second intervals, with results provided below. Room temperature was 22.6°C. At 700 MHz the CPU temperature increased from 48.7 to 53.0°C; at 1000 MHz it was higher, increasing from 50.3 to 60.5°C.


 Command ./liverloopsPiA6 Secs 5

#####################################################

 Livermore Loops Benchmark Opt 3 32 Bit via C/C++ Fri May 17 15:52:01 2013

 Reliability test   5 seconds each loop x 24 x 3

 Part 1 of 3 start at Fri May 17 15:52:01 2013

 Part 2 of 3 start at Fri May 17 15:54:09 2013

 Part 3 of 3 start at Fri May 17 15:56:24 2013

 Numeric results were as expected

#####################################################
 
   

Temperatures Degrees C using /opt/vc/bin/vcgencmd measure_temp

   MHz  Minutes    0   0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0   4.5   5.0   5.5   6.0

   700          48.7  50.8  51.4  51.9  51.9  51.9  51.9  51.9  51.9  52.5  53.0  53.0  52.5
  1000          50.3  56.8  58.4  57.8  57.8  59.5  59.5  59.5  59.5  59.5  60.5  60.5  59.5
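The readings above could be collected with a small C helper; this is a sketch assuming the Raspbian vcgencmd output format temp=48.7'C, and the function names are illustrative, not part of the benchmark.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Extract the numeric value from a vcgencmd line such as "temp=48.7'C".
   Returns -1.0 if the line is not in the expected format. */
double parse_temp(const char *line)
{
    const char *eq = strchr(line, '=');
    return eq ? atof(eq + 1) : -1.0;
}

/* Run measure_temp and parse its single line of output. */
double read_cpu_temp(void)
{
    char buf[64] = "";
    FILE *p = popen("/opt/vc/bin/vcgencmd measure_temp", "r");
    if (p == NULL)
        return -1.0;
    if (fgets(buf, sizeof buf, p) == NULL)
        buf[0] = '\0';
    pclose(p);
    return parse_temp(buf);
}
```

Calling read_cpu_temp() in a loop with sleep(30) reproduces the 30 second sampling used for the table above.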


To Start


Memory Speed Benchmark - memspeedPiA6 memspeedPiA7

The MemSpeed benchmark measures data reading speeds in MegaBytes per second, carrying out calculations on arrays of cache and RAM data, normally sized 2 x 4 KB to 2 x 4 MB. The calculations are as shown in the results headings. For the first two double precision tests, speed in Millions of Floating Point Operations Per Second (MFLOPS) can be calculated by dividing MB/second by 8 and 16; for single precision, divide by 4 and 8. A disassembly showed that Millions of assembler Instructions Per Second (MIPS), for the first two integer tests, can be calculated by multiplying MB/second by 0.78 and 0.59. For the three copy tests, MIPS are MB/second times 0.344 for double precision and 0.688 for the other two. These calculations are shown below. Note that the changes in speed, as data size increases, indicate the sizes of the caches. As different instruction counts are produced with later NEON compilations, MOPS are shown for the first integer test.
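The first calculation and its MB/second to MFLOPS conversion can be sketched as below (function names are illustrative): each element executes one multiply and one add (2 floating point operations) while reading two doubles (16 bytes), hence the divide by 8.

```c
#include <stddef.h>

/* First MemSpeed calculation: x[m] = x[m] + s*y[m], 2 flops per element,
   16 bytes read per element in double precision. */
void triad(double *x, const double *y, double s, size_t n)
{
    for (size_t m = 0; m < n; m++)
        x[m] = x[m] + s * y[m];
}

/* Convert a measured MB/second to MFLOPS, given the bytes read and
   floating point operations per element. */
double mflops_from_mbs(double mb_per_sec, double bytes_per_elem,
                       double flops_per_elem)
{
    return mb_per_sec * flops_per_elem / bytes_per_elem;
}
```

For example, the 700 MHz Pi's best double precision speed of 568 MB/second gives mflops_from_mbs(568.0, 16.0, 2.0) = 71 MFLOPS, matching the Max MFLOPS figure in the results below.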

The two executables are for the Raspberry Pi, with memspeedIL for Intel/Linux. Particularly for the latter, the default maximum of 8 MB might be too small to demonstrate RAM speed. For either, a run time parameter is provided to use more memory, for up to 128, 256, 512 or 1024 MB - examples memspeedPiA6 MB 256 and memspeedIL MB 1024.


        Raspberry Pi CPU 700 MHz, Core 400 MHz, SDRAM 400 MHz

     Memory Reading Speed Test 32 Bit Version 4 by Roy Longbottom

               Start of test Mon May 20 10:25:17 2013

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S

       8     538    640    930    602    731   1094   1230    465    465 L1
      16     568    602    787    602    731   1023   1000    426    507
      32     292    256    310    276    262    330   1066    426    547 L2
      64     276    238    276    262    238    292    341    269    284
     128     189    170    193    182    170    200    222    196    204
     256     140    129    142    136    129    144    138    119    124 RAM
     512     138    127    138    134    127    144    131    111    119
    1024     136    127    138    134    127    144    124    111    119
    2048     136    127    138    132    128    144    128    111    121
    4096     136    128    138    134    126    144    128    111    119
    8192     138    127    138    136    127    144    126    111    119

                End of test Mon May 20 10:26:06 2013

 Max MFLOPS   71    160            38     91
 Max MIPS                  725                  645    423    320    320
 Max MOPS                  233
  


To Start


Memory Speed Comparison

The first results below are for the Raspberry Pi at the maximum overclocked settings. The overheads of repetitively running the tests cause variations in speeds at the smaller data sizes, but the average overclocked speed gain, using L1 cache, is 1.41 times, compared with 1.43 times CPU MHz. Average RAM speed gains are 1.53 times, in line with expectations. A surprise is for L2 cache based data, where the average gain is 1.72 times and some speeds appear to be faster than using L1 cache.

Comparing 900 MHz Raspberry Pi 2 results, from gcc 4.8 (PiA7), with the original system at 700 MHz indicates average performance gains of 3.3, 5.3 and 3.8 times for L1 cache, L2 cache and RAM based data, increased from the old PiA6 version at 2.4, 4.5 and 3.5 times. The first calculations are the same as those that determine Linpack benchmark speeds; in this case gcc 4.8 single precision speeds are again slower than using the original benchmarks (PiA7 vfma.f32 instructions and PiA6 fmacs). The PiA7 integer calculations provide the highest performance gains, from cached data, the test loop containing 2 vector loads to quad word registers (vld1.32), 2 vector adds (vadd.i32) and one vector store (vst1.32), compared with 8 loads, 8 adds and 4 stores in PiA6.
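The integer calculation concerned is sketched below; that gcc 4.8 turns it into the quoted quad word NEON sequence is based on the disassembly described above, and assumes suitable options such as -O3 with NEON enabled.

```c
#include <stddef.h>
#include <stdint.h>

/* Integer MemSpeed calculation, x[m] = x[m] + y[m] on 32 bit words.
   Per the disassembly, the gcc 4.8 PiA7 build vectorises this into
   vld1.32 loads, vadd.i32 adds and vst1.32 stores, handling four
   elements per instruction instead of one. */
void int_add(int32_t *x, const int32_t *y, size_t n)
{
    for (size_t m = 0; m < n; m++)
        x[m] = x[m] + y[m];
}
```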

Results for a version compiled to use NEON instructions, providing some of the fastest speeds, are included below. For more details see MemSpeed NEON.

Later results are for the same code compiled for Android devices, less the copy tests, where the later ARM systems are considerably faster. In this case, the Pi performs relatively well on single precision floating point. For other results see Android Benchmarks.htm.

The other results are using the Intel/Linux version, where speeds are generally much faster. An exception is L1 cache speed using single precision floating point, where the Pi is faster than the Atom on a MFLOPS/MHz basis. For older PC speeds that are slower than the Raspberry Pi see MemSpd2k results.htm.


   Raspberry Pi CPU 1000 MHz, Core 500 MHz, SDRAM 600 MHz, 6 volts


   Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S

       8     602    640   1185    930   1163   1662   1422    511    761 L1
      16     787    930   1292    853   1023   1523   1777    537    761
      32     487    426    487    465    426    568   1939    820   1142 L2
      64     465    393    465    426    393    511    592    457    508
     128     330    310    341    320    301    365    341    301    341
     256     208    200    213    204    200    217    196    170    189 RAM
     512     204    200    213    200    200    213    196    176    182
    1024     213    200    208    200    200    217    196    170    182
    2048     204    196    213    204    200    217    196    170    182
    4096     204    200    213    200    200    217    196    170    182
    8192     204    200    213    200    200    218    204    169    182

 Max MFLOPS   98    232            58    145
 Max MIPS                 1007                  980    667    563    785
 Max MOPS                  323

 ############################## RPi 2 ##################################

   Raspberry Pi 2 CPU 900 MHz, Core 250 MHz, SDRAM 450 MHz

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S
 PiA6
       8     731   1280   1142   2133   1454   1422   2666   1523   1641 L1
      16    1066   1333   1292   1969   1406   1523   2666   1523   1641
      32    1023   1293   1094   1828   1333   1406   2051   1428   1428
      64     930   1016   1067   1662   1185   1230   1230   1333   1333 L2
     128     853   1016   1023   1524   1186   1186   1163   1454   1333
     256     853   1068    930   1423   1186   1186   1143   1455   1455
     512     602    853    787   1168    853    930   1144   1027   1066
    1024     365    512    393    465    538    426    984    511    465 RAM
    2048     310    445    310    353    465    330    853    496    496
    4096     301    445    301    341    445    330    834    546    511
    8192     307    446    317    351    446    338    945    580    580

 Max MFLOPS   133    333
 Max MOPS                  323

 PiA7
       8     929    832   2047   2044   1366   2862   2035   2690   2845
      16    1398   1197   2050   2049   1368   2868   2044   2861   2861
      32    1264   1094   1768   1773   1227   2272   1700   2159   2160
      64    1195   1042   1634   1635   1161   1997   1450   1479   1488
     128    1133    991   1512   1526   1095   1792   1154   1121   1124
     256     961    981   1500   1506   1089   1787   1132   1078   1064
     512     629    669    895    878    717    979   1146    786    788
    1024     400    396    470    458    413    496    943    642    644
    2048     326    313    357    354    328    374    958    678    678
    4096     322    311    354    351    326    372    954    721    718
    8192     325    311    355    353    327    372    952    732    733

 Max MFLOPS  175    299
 Max MOPS                  512

 ########################### RPi 2 OC ##################################

  Raspberry Pi 2 CPU 1000 MHz, Core 500 MHz, SDRAM 500 MHz, over_voltage=2

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S
 PiA6
       8     682    853   1306   2327   1523   1777   2909   1777   1777
      16    1185   1406   1306   2327   1523   1777   2666   1777   1777
      32    1185   1333   1333   1969   1523   1523   2279   1641   1641
      64    1023   1293   1094   1778   1333   1306   1882   1599   1428
     128    1023   1186   1094   1641   1230   1333   1641   1524   1539
     256     930   1142   1016   1642   1333   1333   1778   1429   1524
     512     682    930    787   1094    930    930   1642   1068    984
    1024     465    602    487    568    639    538   1168    618    618
    2048     379    538    409    465    538    409    914    597    597
    4096     379    538    379    445    538    409    904    658    682
    8192     378    546    393    446    546    427    819    750    760

 Max MFLOPS  148    351
 Max MOPS                  333

 PiA7
       8     918    928   2261   2258   1509   3162   2248   3142   3143
      16    1547   1322   2265   2264   1511   3168   2258   3160   3160
      32    1536   1314   2251   2245   1501   3146   2247   3141   3130
      64    1296   1135   1773   1776   1263   2134   1795   1789   1797
     128    1226   1098   1679   1676   1213   1996   1822   1483   1486
     256    1013    985   1442   1446   1083   1672   1549   1311   1304
     512     568    553    694    682    579    742   1371    989    993
    1024     473    465    550    548    485    591   1279    913    916
    2048     413    400    459    456    415    484    943    688    688
    4096     410    398    455    446    411    480    871    620    620
    8192     411    399    457    454    412    482    847    601    600

 Max MFLOPS  193    330           142    189     
 Max MOPS                  566

########################### RPi 2 NEON #################################

   Raspberry Pi 2 CPU 900 MHz, Core 250 MHz, SDRAM 450 MHz

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S

       8     918   1778   2031   2029   2369   2838   2020   2825   2823
      16    1388   1781   2034   2034   2374   2847   2029   2840   2828
      32    1380   1768   2021   2020   2357   2811   2024   2832   2831
      64    1169   1435   1595   1597   1785   1924   1573   1392   1391
     128    1124   1366   1509   1513   1688   1794   1608    990    986
     256     875   1163   1270   1269   1391   1460   1163    892    900
     512     675    886    953    941   1022   1074   1081    776    785
    1024     363    401    409    399    419    428    904    596    596
    2048     318    338    341    343    355    362    751    539    541
    4096     316    333    339    339    351    359    720    501    503
    8192     317    334    340    340    352    361    709    483    484

 Max MFLOPS  174    445           127    297
 Max MOPS                  509

 ######################## RPi 2 NEON OC ################################

  Raspberry Pi 2 CPU 1000 MHz, Core 500 MHz, SDRAM 500 MHz, over_voltage=2

    Memory Reading Speed Test NEON 32 Bit Version 1 by Roy Longbottom

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S
 NEON
       8    1542   1963   2257   2253   2633   3143   1672   2143   3078
      16    1542   1978   2248   2258   2638   3163   2247   3111   3116
      32    1402   1744   1961   1965   2221   2481   1958   2532   2534
      64    1303   1596   1770   1778   1988   2146   1700   1756   1756
     128    1242   1508   1665   1667   1862   1977   1599   1458   1467
     256     976   1276   1376   1395   1532   1483   1610   1313   1315
     512     756    966   1031   1020   1111   1156   1643   1099   1107
    1024     476    544    569    554    584    606   1376    953    956
    2048     401    432    447    444    458    471   1268    968    967
    4096     401    429    443    436    455    466   1239   1043   1039
    8192     404    434    448    446    460    472   1001    777    779

 Max MFLOPS  193    493           141    330
 Max MOPS                  562

############################# Other ####################################

   Android MemSpeed Benchmark 17-Oct-2012 20.19
       ARM Cortex-A9 1300 MHz, 1 GB DDR3 RAM

              Reading Speed in MBytes/Second
  Memory  x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]
  KBytes   Dble   Sngl    Int   Dble   Sngl    Int

      16   1735    888   2456   2726   1364   2818 L1
      32   1448    760   1474   1700   1039   1648
      64   1318    719   1290   1468    952   1385 L2
     128   1279    715   1289   1443    944   1336
     256   1268    714   1279   1435    943   1313
     512   1158    691   1204   1321    892   1228
    1024    729    553    735    772    632    742 
    4096    445    392    425    442    421    439 RAM
   16384    435    390    428    435    412    431
   65536    445    404    393    450    432    449


                  Intel Atom 1666 MHz memspeedIL

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S

       8    1720    853   2150   2203   1086   3686   1379   1851   1785 L1
      16    1612    825   2051   2150   1075   2962   1599   1777   1612
      32    1517    825   1785   2019   1041   2666   1290   1388   1379 L2
      64    1470    825   1785   2051   1041   2580   1379   1333   1646
     128    1724    948   2272   2580   1358   3463   1612   1785   1785
     256    1725    948   2299   2499   1403   3572   1613   1731   1785
     512    1624    914   2151   2349   1315   3228   1533   1670   1668
    1024    1590    882   1990   2155   1296   2515   1251   1292   1292 RAM
    2048    1590    882   1998   2095   1263   2235   1081   1117   1076
    4096    1553    914   1951   2111   1279   2180   1076   1084   1055
    8192    1592    910   1985   2113   1279   2171   1092   1085   1119


             Core 2 2400 MHz, Dual channel DDR2 RAM, memspeedIL

  Memory   x[m]=x[m]+s*y[m] Int+   x[m]=x[m]+y[m]         x[m]=y[m]
  KBytes    Dble   Sngl  Int32   Dble   Sngl  Int32   Dble   Sngl  Int32
    Used    MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S   MB/S

       8   17427   6736   6249  12498   6450   6399  12498   6348   6348 L1
      16   13839   6450   6249  12498   6450   6450  12985   6348   6249
      32   16664   6249   6450  13134   6399   6450  12498   6348   6143
      64   10751   4999   5262   7528   4999   5332   5119   3555   3555 L2
     128    7831   4999   5332   7313   4999   5333   5119   3703   3703
     256   11494   4999   5332   7691   4999   5333   5208   3555   3656
     512   11347   5160   5333   7313   4999   5264   5209   3555   3656
    1024    9142   5160   5333   7699   5160   5332   5211   3707   3656
    2048   10239   5007   5341   7528   4949   5341   5119   3555   3451
    4096    7110   4790   5023   6920   4790   5023   4013   3135   3236
    8192    3949   3686   3813   4031   3794   3794   2047   2015   1974 RAM

   


To Start


Bus Speed Benchmark - busspeedPiA6 busspeedPiA7

This benchmark is designed to identify burst reading of data over buses. The program starts by reading one word (4 bytes) with an address increment of 32 words (128 bytes) before reading another word. The increment is halved on successive tests, until all data is read.
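The reading pattern can be sketched as follows; this is a hypothetical outline, not the benchmark's exact code. One word in every inc words is read and ANDed, with inc halving from 32 to 1 over the series of tests.

```c
#include <stddef.h>
#include <stdint.h>

/* Read one 4 byte word in every 'inc' words, ANDing the values so the
   compiler cannot remove the loads. An increment of 32 words reads 4
   bytes out of every 128; an increment of 1 reads all data. */
uint32_t strided_read(const uint32_t *data, size_t nwords, size_t inc)
{
    uint32_t acc = 0xFFFFFFFFu;
    for (size_t i = 0; i < nwords; i += inc)
        acc &= data[i];
    return acc;
}
```

Timing strided_read for inc = 32, 16, 8, 4, 2 and 1 over the same data produces the column structure of the tables below.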

Until faster memory and multiple core systems came along, interpretation of Intel results was reasonably easy. Besides burst reading, there are start up latency delays, specified as a number of bus clocks (CAS). The Atom below can achieve a maximum speed of 6400 MBytes per second. Maximum achievable speed from RAM is suggested to be 135 x 32 = 4320 MB/s, with fewer CAS overheads than when reading all data at 3262 MB/s. Bursts appear to be 32 words or 128 bytes, that is 16 transfers over 8 bus clocks. Latency is detected as CAS 6, making 14 clocks per burst. So, achievable speed could be 8/14 x 6400 = 3657 MB/s - not a bad estimate.
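That arithmetic can be expressed as a small helper (an illustration of the reasoning, not benchmark code): each burst occupies its data clocks plus the CAS latency clocks, scaling the peak speed accordingly.

```c
/* Estimate achievable throughput as peak speed scaled by data clocks
   per burst over total clocks including CAS latency. For the Atom,
   burst_estimate(6400.0, 8.0, 6.0) gives 8/14 x 6400, about 3657 MB/s. */
double burst_estimate(double peak_mbs, double burst_clocks, double cas_clocks)
{
    return peak_mbs * burst_clocks / (burst_clocks + cas_clocks);
}
```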

No such luck with earlier ARM and Android based systems. The following Nexus 7 has four 1.3 GHz CPU cores and needs all of them accessing memory at the same time for higher data transfer speeds (see Android Multithreading Benchmarks). Even then, maximum speed could be 216 x 16 or 3456 MB/s, but only 1351 MB/s is achieved. Later Androids can provide significant improvements, as indicated by the Galaxy SIII results, now included below.

A preliminary run of the gcc 4.8 compiled BusSpeed multithreading benchmark indicated that two Raspberry Pi 2 cores can produce a respectable throughput from RAM, as shown below. However, there is a problem. The results are valid for multiple cores reading the same data from memory, but the Raspberry Pi 2 has a 512 KB L2 cache, shared by all cores, and that can distort the measurements (the other CPUs shown could be similarly affected).

                                                                    Bus
          Inc32  Inc16   Inc8   Inc4   Inc2   Read  Clock    DDR  Width    Max 
          Words  Words  Words  Words  Words    All    MHz     x2  Bytes  MB/sec
  
   Atom     135    262    541   1048   1973   3262    400    800    x 8   6400            

   Nexus 7          56     82    125    174    334    666   1333    x 4   5333
   2 Threads       114    186    250    346    673
   4 Threads       216    334    228    695   1351
 
   Galaxy SIII      89    200    376    739   1184    533   1066    x 8#  8528
   2 Threads       179    407    797   1449   2205
   4 Threads       359    334   1227   1183   4038
                                                                 # dual channel

   Raspberry Pi 2   83    165    295    632   1262    450    900    x 4   3600
   2 Threads       165    228    579   1246   2382
   4 Threads       122    281    691   1537   2469
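A minimal sketch of the multi-core reading arrangement, assuming pthreads, with each thread ANDing its own section of a shared array; the sizes, thread count and function names are illustrative, not the benchmark's own, and an array much larger than the 512 KB L2 cache is needed to avoid the distortion mentioned above.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define NTHREADS 2
#define NWORDS   (1 << 20)        /* 4 MB of 32 bit words, larger than L2 */

static uint32_t shared_data[NWORDS];
static uint32_t thread_result[NTHREADS];

/* Each thread ANDs its own section of the shared array. */
static void *read_section(void *arg)
{
    size_t t = (size_t)arg;
    size_t per = NWORDS / NTHREADS;
    uint32_t acc = 0xFFFFFFFFu;
    for (size_t i = t * per; i < (t + 1) * per; i++)
        acc &= shared_data[i];
    thread_result[t] = acc;
    return NULL;
}

/* Start the readers, wait for completion and combine their results. */
uint32_t parallel_read(void)
{
    pthread_t tid[NTHREADS];
    uint32_t acc = 0xFFFFFFFFu;
    for (size_t t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, read_section, (void *)t);
    for (size_t t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        acc &= thread_result[t];
    }
    return acc;
}
```

Timing parallel_read with 1, 2 and 4 threads (linking with -pthread) would reproduce the thread scaling rows in the table above.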
  

Below are the Raspberry Pi results from the busSpeed.txt log file, running at the default speed settings. The program's main test has 64 C statements that translate into 64 load and 64 AND instructions. With loop overheads, that becomes 132 instructions per 256 bytes, so MIPS will be MB/second x 0.516.

The results suggest that data transfer bursts are 32 bytes (8 transfers of 4 bytes), with a possible maximum speed of 8 x 34 = 272 MB/second at this single core level. They also imply that there is burst reading from caches as well as from RAM, and that RAM performance is not very good with this single core CPU.


 Raspberry Pi CPU 700 MHz, Core 400 MHz, SDRAM 400 MHz
   Maximum speed 400 x 2 (DDR) x 4 Width = 3.2 GB/sec

   BusSpeed 32 Bit V1.1 Wed May 22 15:28:01 2013

     Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All

      16    290    304    568    984   1125   1142 L1
      32    133    116    131    133    225    465 L2
      64    116     98    116    109    192    409
     128     60     54     62     68    126    273
     256     34     34     34     43     88    192 RAM
     512     34     34     34     45     91    200
    1024     34     31     34     45     91    181
    4096     32     33     33     45     87    183
   16384     32     32     34     44     83    186
   65536     34     32     34     44     88    186

        End of test Wed May 22 15:28:13 2013



To Start


Bus Speed Comparison

The first comparison is with the overclocked Pi, where most results are as might be expected from the higher clock frequencies but, again, some L2 cache speeds are quite a bit faster.

Raspberry Pi 2 results are shown with the CPU at 900 MHz and overclocked to 1 GHz, the corresponding SDRAM frequencies being 450 and 500 MHz. The busspeedPiA6 speeds are most unusual in that reading all data is slower than reading every other word. Assembly Code appears to show that there is little difference in the instructions generated by the two versions, except that PiA7 uses negative indexing. The comparisons shown with the PiA7 1 GHz details suggest that speed from RAM is at least 2.5 times faster from gcc 4.8. The other comparisons are for busspeedPiA6, where the highest performance gains of the RPi 2 are via data in L2 cache.

Next results are for one CPU core on a Nexus 7, with a 1300 MHz ARM Cortex-A9 processor. The overclocked Pi is not too far away on RAM performance but falls behind on L1 and L2 cache based data.

The two Intel examples are clearly much faster but BusSpd2k Results on PCs provides results on older systems where the Raspberry Pi is the winner (ignore the last two columns for MMX instructions). There are also results of some slower systems in Android Benchmarks.htm.


 Raspberry Pi CPU 1000 MHz, Core 500 MHz, SDRAM 600 MHz, 6 volts

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All

      16    290    387    984   1505   1575   1750 L1
      32    246    186    232    232    393    731 L2
      64    146    113    131    148    273    546
     128    102     87     93    113    210    420
     256     53     48     53     75    131    303 RAM
     512     48     48     50     75    137    300
    1024     48     50     49     69    139    305
    4096     50     52     52     72    134    299
   16384     48     52     52     69    139    296
   65536     49     52     49     72    139    291

 ############################## RPi 2 ##################################

   Raspberry Pi 2 CPU 900 MHz, Core 250 MHz, SDRAM 450 MHz

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All
 PiA6
      16   1346   1428   1575   1641   1706   1489 L1
      32    930    984   1163   1422   1489   1641
      64    426    372    630   1024   1365   1365 L2
     128    341    380    682   1137   1462   1191
     256    213    232    512    813   1191   1169
     512    129    136    273    570    840    782
    1024     73     83    167    360    685    412 RAM
    4096     63     76    152    293    629    322
   16384     69     74    149    314    599    335
   65536     69     78    148    279    629    335

 PiA7
      16    950   1509   1632   1726   1734   1738
      32   1240   1318   1437   1716   1633   1681
      64    419    429    747   1214   1479   1587
     128    386    411    702   1211   1572   1625
     256    367    399    691   1194   1573   1634
     512    138    164    313    598    990   1363
    1024     79     88    175    372    673   1264
    4096     66     76    154    300    632   1266
   16384     71     77    154    299    633   1264
   65536     71     76    154    297    633   1261

 ########################### RPi 2 OC ##################################

  Raspberry Pi 2 CPU 1000 MHz, Core 500 MHz, SDRAM 500 MHz, over_voltage=2

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read  Pi2/Pi
  KBytes  Words  Words  Words  Words  Words    All  1 GHz
 PiA6
      16   1066   1662   1706   1975   1861   1896   1.08
      32    930   1163   1367   1706   1706   1861   2.55
      64    465    474    820   1219   1575   1462   2.68
     128    372    426    787   1241   1706   1490   3.55
     256    393    426    745   1260   1626   1491   4.92
     512    266    281    522    916   1367   1196   3.99
    1024    105    114    249    456    913    508   1.67
    4096     93    115    220    396    880    419   1.40
   16384    100    113    227    419    838    441   1.49
   65536     97    111    209    419    883    447   1.54

                                                    A7/A6
 PiA7                                               1 GHz
      16   1554   1662   1813   1894   1892   1894   1.00
      32    629    648    911   1328   1604   1756   0.94
      64    453    461    803   1245   1572   1752   1.20
     128    394    430    773   1284   1705   1783   1.20
     256    280    410    747   1306   1733   1798   1.21
     512    242    253    472    891   1335   1607   1.34
    1024    107    122    243    481    919   1287   2.53
    4096     95    108    216    420    886   1204   2.87
   16384     98    108    216    419    885   1205   2.73
   65536     99    109    216    419    888   1204   2.69

############################# Other ####################################



     Android BusSpeed Benchmark 19-Oct-2012 17.29
       ARM Cortex-A9 1300 MHz, 1 GB DDR3 RAM
      RAM 1 GB DDR3L-1333 Bandwidth 5.3 GB/sec

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All

      16   2723   2420   3044   3364   3499   3500 L1
      32   1054   1087   1061   1382   1565   2145
      64    436    433    419    652    751   1160 L2
     128    345    337    337    542    633    943
     256    329    309    322    522    614    961
     512    339    299    311    506    574    937
    1024    170    168    180    269    349    629
    4096     59     55     84    127    176    338 RAM
   16384     56     56     83    125    173    335
   65536     56     56     82    125    174    334


        Intel Atom 1666 MHz busspeedIL

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All

      16   3703   5160   5881   6249   6399   6529 L1
      32    484    396    745   1474   2499   3931 L2
      64    484    393    787   1516   2482   3878
     128    491    410    775   1462   2509   3923
     256    492    415    775   1454   2540   3887
     512    225    327    606   1213   2184   3534
    1024    130    266    533   1034   1952   3306 RAM
    4096    126    262    524   1048   1941   3313
   16384    135    270    508   1048   1917   3276
   65536    135    262    541   1048   1973   3262


 Core 2 2400 MHz, Dual channel DDR2 RAM, busspeedIL

    Reading Speed 4 Byte Words in MBytes/Second
  Memory  Inc32  Inc16   Inc8   Inc4   Inc2   Read
  KBytes  Words  Words  Words  Words  Words    All

      16   6535   5516   6059   6490   6205   6304 L1
      32   5925   3225   3938   6023   6094   5966
      64   1721   1305   2154   3047   4444   5269 L2
     128   1407   1333   2172   3033   4571   5333
     256   1538   1365   2206   3047   4432   5334
     512   1391   1376   2150   3102   4552   5336
    1024   1377   1376   2202   3104   4519   5460
    4096    731    814   1425   2206   3669   4882
   16384    345    380    761   1310   2530   4343 RAM
   65536    321    374    748   1310   2485   4066

  


To Start


Java Benchmarks

Java programs can run under any Operating System, assuming that a compatible Java Runtime Environment (JRE) is available. The JRE translates general purpose .class files, produced using a Java Development Kit (JDK), into hardware dependent computer instructions.

Java programs can be run in two different ways, that is off-line, in this case via a Terminal command (example java myprog), or on-line as an Applet launched by an HTML document. In both cases, the .class files are produced using the javac command (example javac myprog.java).

In my experience, the initial Raspbian Operating System had no JREs or JDKs installed. After executing the following, the command java -version reported Java version “1.6.0_27”. Java Applets and off-line files could then be executed, but floating point arithmetic was painfully slow.

sudo apt-get update
sudo apt-get install icedtea-plugin

It was then discovered that the Oracle Java SE 8 Developer Preview for ARM was needed to provide high speed hard float support. This and JRE 7 were installed using Instructions to install Java 8 and 7 (the various JREs could then be selected using sudo update-alternatives --config java).

Using JRE 8 produced the desired effect with off-line Java but made no difference when accessing on-line Applets. A question in the Raspberry Pi Forum provided a solution giving at least faster floating point. It specifies changes to .cfg files to enable JamVM to be used as an alternative Java Virtual Machine. Further details from a Pi Forum message suggested that Cacao VM would be faster than JamVM on floating point calculations, and this proved to be true. The available version was icedtea-6-jre-cacao, for JRE 6. This is run using the command java -cacao program; -jamvm can also be used, or -zero for the original slow version, but only when JRE 6 is selected.

Using a newly installed Raspbian, the on-line versions only run via the Midori browser. After loading the page, it can take longer than 10 seconds before a benchmark starts running.

As usual the benchmarks and source codes are included in Raspberry_Pi_Benchmarks.zip.

To Start


Java Whetstone Benchmarks

Details of the Whetstone Benchmark are provided above. Both off-line and on-line versions are provided in the zip file, including source code for off-line and Applet versions, along with the HTML page to run the program. A text log file is produced by the off-line version; for on-line runs, if a record of performance is required, a screen copy has to be made (scrot -s command, then click on the browser; scrot needs installing via sudo apt-get install scrot). Examples of both are below.

The benchmark .class files were compiled using JDK 1.6 via Linux Ubuntu 10.04, and these run via Windows and Linux. The zip file also includes .class files produced by JDK 7 on the Raspberry Pi. WARNING: the latter failed to run via Ubuntu using JRE 6, but they do run on the Pi, which also has JRE 7.

The on-line versions are run by clicking on whetjava2.htm (or right click and select browser) and the off-line varieties using the command “java whetstc”. The on-line version can also be run via Online Benchmarks.htm.


     Whetstone Benchmark Java Version, May 27 2013, 18:09:00

                                                       1 Pass
  Test                  Result       MFLOPS     MOPS  millisecs

  N1 floating point  -1.124750137     49.18             0.3904
  N2 floating point  -1.131330490     46.54             2.8880
  N3 if then else     1.000000000              27.73    3.7320
  N4 fixed point     12.000000000              92.48    3.4060
  N5 sin,cos etc.     0.499110103               1.08   77.3100
  N6 floating point   0.999999821     26.69            20.2100
  N7 assignments      3.000000000              39.90    4.6320
  N8 exp,sqrt etc.    0.751108646               0.31  119.9300

  MWIPS                               43.01           232.4984

  Operating System    Linux, Arch. arm, Version 3.6.11+
  Java Vendor         Oracle Corporation, Version  1.8.0-ea

  

Screen Copy


To Start


Java Whetstone Comparison

Following are off-line and on-line results, showing the changes in performance through using upgraded JREs and with overclocking (CPU 1000 MHz, Core 500 MHz, SDRAM 600 MHz, over_voltage 6). There is not much difference in overall MWIPS between JamVM and the original; the original is faster on the COS/EXP tests but much slower on the other tests. Similarly, JRE 8 averages about four times faster than JRE 7 with JamVM, if the COS/EXP results are excluded. Later results, using Cacao VM, show that it is much faster than JamVM, again except for the COS/EXP functions.

The off-line versions ran without any problems on the Raspberry Pi 2, other than the EXP test that was particularly slow via JRE 8, reducing the overall MWIPS rating. Other than this, performance via JRE 8 was significantly better than using JRE 6 or JRE 7, as it was on the original RPi. JRE 8 RPi 2 speeds were between 1.5 and 4.3 times faster than RPi 1.

I was unable to persuade Epiphany Browser to use JRE 7 or 8, to run the on-line version. I installed IcedTea 1.6, then Midori Browser to obtain the first results (subject to providing permission to JRE 6). Then, out of the blue, Epiphany ran the benchmark applet, obtaining the same performance as Midori. Results, with the CPU at 1 GHz, were 2.0 to 5.7 times faster than those run on the earlier Raspberry Pi.

A result of the C version is also shown, along with another from the Android version. For further details and results see Android Benchmarks and Whetstone Benchmark Java Results.


  Version     JRE  MWIPS  ------MFLOPS-------   -------------MOPS---------------
                              1      2      3    COS    EXP  FIXPT     IF  EQUAL
  Off-line

  Original      6   18.3    4.4    6.4    3.3   0.99   0.30    7.9    2.9    2.5
  JamVM         6   23.4    9.4   10.0    8.9   0.69   0.23   17.8    8.1    5.4
  Cacao         6   32.7   25.5   36.7   25.7   0.76   0.24   55.2   28.9   25.8
  Original      7   18.7    4.3    6.2    3.5   0.98   0.30    7.9    2.9    2.6
  JamVM         7   25.7   12.3   11.7    9.7   0.74   0.24   23.6   10.9    6.2
  Original      8   47.8   49.4   47.8   26.7   1.19   0.36   93.3   27.8   40.0
  1000 MHz      8   75.1   71.4   69.2   39.8   2.10   0.53  134.9   40.3   57.8

  Raspberry Pi 2
   
  Original      6  101.8   30.3   43.6   20.2   2.89   1.98   60.7   38.2   15.0
  Original      7  100.8   30.4   43.6   19.9   2.84   1.99   60.8   38.3   14.8
  Original      8  117.4  118.8  125.3   62.2   3.89   0.74  278.8   60.8  224.8
  javac 1.7     7  100.6   30.4   43.6   19.9   2.83   1.99   60.8   38.3   14.8 
  javac 1.7     8  116.9  119.5  125.1   62.2   3.91   0.73  278.5   60.8  223.0
  javac 1.8     8  116.8  119.5  125.0   62.2   3.81   0.74  278.3   60.8  224.8
  1000 MHz 1.7  8  128.6  133.0  139.4   69.3   4.19   0.81  310.3   67.7  249.7        


  On-line

  JamVM         6   25.3    9.9   10.3    7.7   0.63   0.25   17.9    8.5    5.1
  1000 MHz      6   39.0   14.1   13.7   11.2   1.30   0.40   25.4   12.2    7.2

  Raspberry Pi 2

   900 MHz      6  101.8   40.8   42.8   26.5   2.90   2.01   65.9   51.4   12.4
  1000 MHz      6  120.4   45.7   47.9   28.5   3.20   2.27   73.8   57.7   14.1


              MHz
  C Version   700  270.5   97.8  100.8   85.7   5.90   2.70  425.3  698.6  499.0
  C RPi 2     900  525.0  252.0  261.3  223.0  10.20   5.10 1102.5 1358.4  882.0
  V7-A9 Java 1000  286.5   51.8   85.2   63.6  13.10   5.30  176.1   68.6   35.0

  


To Start


JavaDraw Benchmark

JavaDraw is intended to run the same test functions as my JavaDraw.apk benchmark for Android devices, where details and results can be found in Android Graphics Benchmarks.htm. The benchmark draws small to rather excessive numbers of simple objects to measure drawing performance in Frames Per Second (FPS). Five tests draw on a background of continuously changing colour shades. For further details and results see JavaDraw.htm, where links to on-line versions are also provided. However, some displays from these can be erratic, with tearing.

  • Test 1 loads two PNG files, one bitmap moving left/right for each frame, the other circling. This is repeated twice, in this version, as the long start up time leads to slow speeds being reported.

  • Plus Test 2 for JavaDraw.apk generates 2 SweepGradient multi-coloured circles moving towards the centre and back. For this version the circles are loaded from a PNG file.

  • Plus Test 3 draws 200 random small circles in the middle of the screen.

  • Plus Test 4 draws 80 lines from the centre of each side to the opposite side, again with changing colours.

  • Plus Test 5 draws the same small random circles as Test 3 but with 4000, filling the screen.

  • Each test runs for approximately 10 seconds at window size 1280 x 720 pixels.

  • Two versions are available, JavaDrawPC, compiled using JDK 6 via Linux Ubuntu, and JavaDrawPi, produced on the Pi using JDK 7. Both can be run via Windows and Linux, subject to the appropriate JRE being available. Commands to use are “java JavaDrawPC” and “java JavaDrawPi”.

Measured speeds are displayed in the Terminal window and in JavaDraw.txt log file. An example is shown below, preceded by the display during Test 4. The benchmark identifies the Operating System and JRE used.

JavaDraw Screen Copy


   Java Drawing Benchmark, May 30 2013, 12:40:39
            Produced by javac 1.7.0_02

  Test                              Frames      FPS

  Display PNG Bitmap Twice Pass 1       24     2.39
  Display PNG Bitmap Twice Pass 2      118    11.72
  Plus 2 SweepGradient Circles         116    11.56
  Plus 200 Random Small Circles         95     9.48
  Plus 320 Long Lines                   56     5.60
  Plus 4000 Random Small Circles        20     1.92

         Total Elapsed Time  60.8 seconds

  Operating System    Linux, Arch. arm, Version 3.6.11+
  Java Vendor         Oracle Corporation, Version  1.7.0_07

  


To Start


JavaDraw Comparison - Frames Per Second

Following are Raspberry Pi results at normal and overclocked settings, using JRE 6, 7 and 8. Basic JRE 6 and JRE 6 with JamVM produced similar results, with Cacao VM slightly faster. A surprise was that JRE 8 speeds were much slower (early release - see later results below). JavaDrawPC (from JDK 6) and JavaDrawPi (from Pi JDK 7) also produced similar performance via JRE 7.

Raspberry Pi 2 results provided an average speed gain of 4.2 times for JRE 7, with the CPU running at 1000 MHz, and JRE 8 performance was even better. As the original tests, on the older RPi, indicated extremely slow performance using JRE 8, the software was updated to a new version and the benchmark rerun. Further tests were carried out, a second one restricted to using one CPU core, whilst running the vmstat performance monitor at the same time - see details and results below. These confirm that JavaDraw can use more than one core to improve performance.

Android results on a Nexus 7 are also shown, and these are matched by the Pi at 1 GHz. The final, much faster results are for running the same tests via Linux on Atom and Core 2 based PCs. The screen shot above was from a PC with a quad core 3 GHz Phenom CPU under Windows, where CPU utilisation was around 57%, indicating that more than two cores were fully utilised.


                           PNG       PNG     +Sweep     +200       +320    +4000
                         Bitmaps   Bitmaps  Gradient    Small      Long     Small
                  JRE       1         2      Circles   Circles    Lines    Circles

  Pi    700 MHz     6      3.6      12.0      11.9       9.5       5.5       1.8
  Pi    700 MHz     6 Cac  0.2      13.6      14.7      12.4       7.2       2.8
  Pi    700 MHz     7      2.4      11.7      11.6       9.5       5.6       1.9
  Pi    700 MHz     8      0.4       2.7       2.6       1.9       0.8       0.4
  Pi   1000 MHz     6     10.1      19.5      19.3      15.9       9.4       3.1
  Pi   1000 MHz     6 Cac  8.3      23.1      21.9      18.2      11.0       4.2
  Pi   1000 MHz     7     11.1      19.2      18.7      16.2       9.5       3.1     
  Pi   1000 MHz     8      2.0       4.3       4.2       3.0       1.3       0.6

  Later Java
  Pi    700 MHz     7      0.3      10.9      10.8       8.2       5.0       1.3 
  Pi    700 MHz     8      0.2       7.2      10.7       11.7      7.7       5.1

  Raspberry Pi 2

  Pi 2  900 MHz     6     43.1      54.3      54.2      48.4      31.8      18.4
  Pi 2  900 MHz     7     40.8      52.5      51.9      46.9      30.5      17.1
  Pi 2  900 MHz     8     44.4      56.8      57.3      55.0      38.6      25.2

  1 CPU 900 MHz     8     22.1      35.8      35.4      36.0      38.3      22.4

  Pi 2 1000 MHz     6     51.4      65.7      64.5      57.5      37.4      20.3
  Pi 2 1000 MHz     7     51.5      63.7      62.8      56.4      36.9      20.1
  Pi 2 1000 MHz     8     55.0      69.5      70.0      67.7      46.4      29.5


  Nexus 7 1300 MHz                  20.4      16.5      14.5      11.3       3.8
  Atom    1666 MHz        57.3      83.2      80.1      74.8      53.6      24.5 
  Core 2  2400 MHz       271.5     360.6     227.7     237.6     205.2     142.5

                      Cac = Cacao VM
   


To Start


OpenGL ES Benchmark - OpenGL1Pi.bin

This benchmark is essentially the same as JavaOpenGL1 described in Android Graphics Benchmarks.htm. This has four tests that draw a background of 50 cubes first as wireframes then colour shaded. The third test views the cubes in and out of a tunnel with slotted sides and roof, also containing rotating plates. The last test adds textures to the cubes and plates. The 50 cubes are redrawn 15, 30 and 60 times, with randomised positions, colours and rotational settings. With 6 x 2 triangles per cube, minimum triangles per frame for the three sets of tests are 9000, 18000 and 36000.

Speed is measured in Frames Per Second (FPS). With Android, maximum FPS is 60, limited by the imposition of wait for vertical blank (VSYNC), so there is not much point in using lighter loading. As VSYNC appears not to be forced under Raspbian, additional tests using five cubes (x15 repeats) are included.

OpenGL ES Screen Copy

The commands to compile the OpenGL ES program were extracted from sample program hello_pi makefile.

Nominal duration of each test is 10 seconds. Actual elapsed times and FPS scores are displayed on the LXTerminal display as the tests progress. On completion, results are saved in a text log file. See example below, along with compile and execute commands, the latter having parameters that define the window size to use. As usual, the benchmark, source code and image files used are available in Raspberry_Pi_Benchmarks.zip.

Of particular note, CPU utilisation, shown in Task Manager, is less than 50% for the most stressful test. The run time parameters were changed to allow the benchmark to run for a specified time - see Reliability Tests. This still runs 16 tests but each generates 36000 textured triangles.

June 2015 - Version 1.2 produced. The original version was found to be counting frames twice, doubling FPS speed results. This is not important when comparing performance at different system settings or with Raspberry Pi 2. The revised program has the correct frame count. The results suggest that displays are synchronised to run at a maximum of 50 FPS, using the UK standard frequency of 50 Hz, as the VSYNC setting.


  Compile Commands  Use the two cc extremely long (>512 chars) compile
                    commands and the cc link command in comments
                    at the start of OpenGL1Pi.c

  Make files        These are now included in the zip file. A make
                    command executes Makefile, which uses Makefile.include to
                    compile and link the benchmark programs.

  Run Commands      ./OpenGL1Pi.bin Wide pppp, High pppp RunTime mm
                    pppp = pixels, mm - minutes for reliability test
  Default           ./OpenGL1Pi.bin  - 1280 x 720, 16 x 10 second tests
                    only the first letter of parameter names is needed, upper or lower case
  pppp              any size e.g. W 1920, H 1080 - W 120, H 60 - W 60, H 120


 Example OpenGLPi.txt Log File 

 Raspberry Pi OpenGL ES Benchmark 1.2, Mon Jun  8 11:22:12 2015

                --------- Frames Per Second --------
      Triangles WireFrame   Shaded  Shaded+ Textured

          900+      50.05    50.01    43.50    39.30
         9000+      20.20    20.06    15.06    11.60
        18000+      10.27    10.19     8.72     6.41
        36000+       5.15     5.13     4.74     3.43

             Screen Pixels 1280 Wide 720 High

            End Time Mon Jun  8 11:24:54 2015
  


To Start


OpenGL ES Comparison - Frames Per Second

The following results show that maximum overclocking, larger window sizes and smaller ones do not produce significant variations in performance.

Raspberry Pi 2 - The benchmarks were run on RPi 2 and resultant speeds are little different. Measured CPU utilisation was typically 6% or 24% of one CPU core. Recompilation with Cortex A7 parameters made no difference. Details are below.

Lastly, some Android JavaOpenGL1 results are shown for comparison purposes.


 ############ Original Raspberry Pi ############# 

   RPi 700 MHz, Screen Pixels 1280 x 720

             --------- Frames Per Second --------
   Triangles WireFrame   Shaded  Shaded+ Textured

       900+      50.05    50.01    43.50    39.30
      9000+      20.20    20.06    15.06    11.60
     18000+      10.27    10.19     8.72     6.41
     36000+       5.15     5.13     4.74     3.43


   RPi 1000 MHz, Screen Pixels 1280 x 720

       900+      50.07    50.01    43.82    39.58
      9000+      20.20    20.18    15.13    11.64
     18000+      10.25    10.25     8.76     6.42
     36000+       5.15     5.16     4.76     3.44


   RPi 700 MHz, Screen Pixels 1920 x 1080

       900+      50.05    50.01    43.50    39.30
      9000+      20.20    20.06    15.06    11.60
     18000+      10.27    10.19     8.72     6.41
     36000+       5.15     5.13     4.74     3.43


   RPi 700 MHz, Screen Pixels 320 x 180

       900+      50.11    50.01    44.90    41.80
      9000+      20.60    20.49    15.33    12.79
     18000+      10.41    10.35     8.85     7.21
     36000+       5.23     5.20     4.79     3.87


 ################ Raspberry Pi 2 ################# 

   RPi 2 900 MHz, Screen Pixels 1280 x 720

           --------- Frames Per Second --------
   Triangles WireFrame   Shaded  Shaded+ Textured

       900+      50.07    50.00    44.76    41.10
      9000+      20.38    20.61    15.36    12.24
     18000+      10.37    10.42     8.90     6.89
     36000+       5.21     5.23     4.82     3.72


   RPi 2 900 MHz, Screen Pixels 1920 x 1080

       900+      50.07    50.00    43.32    38.94
      9000+      19.63    19.75    14.85    11.69
     18000+      10.15    10.03     8.60     6.02
     36000+       4.99     5.06     4.66     3.07


 ##################### Other ##################### 

   Android JavaOpenGL1 Galaxy SIII, Quad  Cortex-A9
   1.4 GHz, Android 4.0.4, ARM Mali-400 MP4 quad 
   core graphics. Screen Pixels 1280 x 720
           --------- Frames Per Second --------
   Triangles WireFrame   Shaded  Shaded+ Textured

      9000+      57.98    59.62    51.93    41.19
     18000+      34.46    34.28    29.61    15.25
     36000+      14.45    13.11    13.03     7.34


   Android JavaOpenGL1 Nexus 7 Quad 1300 MHz Cortex-A9,
   Android 4.1.2, nVidia ULP GeForce Graphics 12 core,
   416 MHz. Screen Pixels 1280 x 736

      9000+      42.18    43.57    33.38    23.54
     18000+      23.68    23.47    19.91    13.38
     36000+      12.05    11.95    11.00     7.10
   


To Start


OpenGL GLUT Benchmark - videogl32

In 2011, I produced a Linux version of my 2004 Windows VideoGL1 benchmark. Its pedigree was established in 2012, when I approved a request from a Quality Engineer at Canonical to use this OpenGL benchmark in the testing framework of the Unity desktop software. One reason probably was that it can be run for extended periods as a stress test. Further details and Linux results are in Linux OpenGL Benchmarks.htm.

The OpenGL version required minimal conversion, with OpenGL code functions unchanged. The benchmark, source code and image files are included in the OpenGL folder in Raspberry_Pi_Benchmarks.zip, and also separately in Raspberry_Pi_OpenGL_Benchmark.zip.

The benchmarks measure graphics speed in terms of Frames Per Second (FPS) via six simple and more complex tests. The first four tests portray moving up and down a tunnel containing various independently moving objects, with and without texturing. The last two tests represent a real application for designing kitchens. The first is in wireframe format, drawn with 23,000 straight lines. The second has colours and textures applied to the surfaces. The textures are obtained from 24 bit BMP files that can be up to 256 x 256 pixels at 192 KB, with those supplied being 64 x 64 pixels at 12 KB.

After booting Raspbian-jessie on a Raspberry Pi 2, freeglut software was installed via:

sudo apt-get update               
sudo apt-get install freeglut3    
sudo apt-get install freeglut3-dev
  

The benchmark was compiled and linked by the following Terminal command:

gcc ogl1.c cpuidc.c -lrt  -lm -O3 -lglut -lGLU -lGL -o videogl32
  

The default benchmark runs all tests, each for 5 seconds, at the current display size settings, with an output header (configuration details and main headings), one line of results and end messages (date and time). Other parameters are pixel dimensions W or Width and H or Height. Initial results, in the next section, were produced via the following script (runit included in zip file), to provide a single table with minimum additional data. The export command is a later addition to turn off Wait For Vertical Blank (or VSYNC), to demonstrate maximum speeds.

export vblank_mode=0                                
./videogl32 Width 320, Height 240, NoEnd            
./videogl32 Width 640, Height 480, NoHeading, NoEnd 
./videogl32 Width 1024, Height 768, NoHeading, NoEnd
./videogl32 NoHeading                               
  

Stress test results, running videogl32 and CPU tests, are included in the stress test report. See: Livermore Loops and Maximum MFLOPS benchmarks.

To Start


OpenGL GLUT Benchmark Comparisons

The first set of results demonstrated extremely slow speeds. Then, via sudo raspi-config I enabled the experimental desktop GL driver, to produce the much improved second set of results. These appear to be limited to a maximum of 50 Frames Per Second, assumed to be due to Wait For Vertical Blank (VSYNC) being active. Googling indicated that an export vblank_mode=0 command was needed. So this was added to the script file to produce the third report.

The fourth table is for loading 192 KB BMP texture files instead of the default 12 KB ones. These could reduce displayed speed by up to three times.

The fifth set of scores is with the system overclocked from 900 to 1000 MHz (1.11 times). Average improvements in FPS speeds were 1.13 times, with small window plain colour tests appearing to be up to 25% faster, these probably being more dependent on graphics speed.


                          First results

 ###################################################################

 GLUT OpenGL Benchmark 32 Bit Version 1, Mon Apr 18 10:01:21 2016

          Running Time Approximately 5 Seconds Each Test

 Window Size  Coloured Objects  Textured Objects  WireFrm  Texture
    Pixels        Few      All      Few      All  Kitchen  Kitchen
  Wide  High      FPS      FPS      FPS      FPS      FPS      FPS

   320   240      9.5      5.4      7.1      3.7      1.5      1.1
   640   480      3.5      2.9      2.8      1.9      1.3      0.7
  1024   768      1.5      1.3      1.3      1.3      1.0      0.4
  1824   984      0.7      0.6      0.6      0.5      0.7      0.2

                   End at Mon Apr 18 10:04:58 2016


           After enabling the experimental desktop GL driver

 ####################################################################

 GLUT OpenGL Benchmark 32 Bit Version 1, Mon Apr 18 10:18:33 2016

          Running Time Approximately 5 Seconds Each Test

 Window Size  Coloured Objects  Textured Objects  WireFrm  Texture
    Pixels        Few      All      Few      All  Kitchen  Kitchen
  Wide  High      FPS      FPS      FPS      FPS      FPS      FPS

   320   240     49.4     49.4     39.9     24.9     10.0      7.1
   640   480     50.0     49.4     30.1     23.8     10.0      7.1
  1024   768     47.2     45.4     24.7     23.3     10.0      7.0
  1920  1080     18.5     18.2     16.5     15.5      9.8      7.0

                   End at Mon Apr 18 10:20:48 2016


                     After disabling VSYNC

 ####################################################################

 GLUT OpenGL Benchmark 32 Bit Version 1, Tue Apr 19 09:02:30 2016

          Running Time Approximately 5 Seconds Each Test

 Window Size  Coloured Objects  Textured Objects  WireFrm  Texture
    Pixels        Few      All      Few      All  Kitchen  Kitchen
  Wide  High      FPS      FPS      FPS      FPS      FPS      FPS

   320   240    210.3    114.4     52.6     32.5     12.1      7.8
   640   480    115.0     89.5     48.5     30.6     11.9      7.7
  1024   768     47.9     46.7     37.5     28.3     11.6      7.6
  1920  1080     20.6     18.6     16.8     15.9     11.4      7.4

                   End at Tue Apr 19 09:04:45 2016


          Larger texture files - 192 KB instead of 12 KB 
 
 ##################################################################

 GLUT OpenGL Benchmark 32 Bit Version 1, Tue Apr 19 13:30:49 2016

          Running Time Approximately 5 Seconds Each Test

 Window Size  Coloured Objects  Textured Objects  WireFrm  Texture
    Pixels        Few      All      Few      All  Kitchen  Kitchen
  Wide  High      FPS      FPS      FPS      FPS      FPS      FPS

   320   240    213.4    110.6     39.5     12.4     11.7      2.6
   640   480    111.1     84.1     34.0     12.1     11.9      2.5
  1024   768     49.1     47.0     27.7     11.0     11.7      2.4
  1920  1080     20.2     17.3     15.7      9.3     11.5      2.2

                   End at Tue Apr 19 13:33:07 2016


  Default Textures, Overclocked CPU at 1000 MHz (1.11 times faster)  
 
 #####################################################################

 GLUT OpenGL Benchmark 32 Bit Version 1, Thu Apr 21 15:41:04 2016

          Running Time Approximately 5 Seconds Each Test

 Window Size  Coloured Objects  Textured Objects  WireFrm  Texture
    Pixels        Few      All      Few      All  Kitchen  Kitchen
  Wide  High      FPS      FPS      FPS      FPS      FPS      FPS

   320   240    266.7    138.7     60.4     36.9     13.5      8.7
   640   480    126.7    103.8     56.4     35.8     13.5      8.8
  1024   768     55.3     51.1     41.4     32.3     13.1      8.5
  1920  1080     21.6     20.7     18.0     17.2     12.8      8.5

  Average Gain   1.14     1.14     1.12     1.13     1.13     1.13
   


To Start


DriveSpeed Benchmark

The main execution C code in version 1.0 was the same as the Android version. However, as some of the results were vastly different from those of a version produced for Linux, the program was revised. The execution and source code are again in Raspberry_Pi_Benchmarks.zip. The benchmark is provided to measure speeds of the main SD card drive and USB attached storage devices. In my case, a mini USB hub was used that has multiple ports and card reading slots. An example of results, displayed and saved in the driveSpeed.txt log file, is shown below. Tests carried out and changes made are:

Test 1 - Write and read three files of 8 and 16 MB; results given in MBytes/second
Test 2 - Write 8 MB, where the read can be cached in RAM; results given in MBytes/second
Test 3 - Random write and read of 1 KB within 4 to 16 MB files; results are average times in milliseconds.
             The original version appeared to enable caching on reading.
Test 4 - Write and read 200 files of 4 KB to 16 KB; results in MB/sec, msecs/file and delete seconds.
             Version 1.0 included an extra “safe to remove” flush that increased file writing times.

Below is a log file for the benchmark running on the SD card. Raspberry Pi 2 speeds were little different (see Comparisons), except for the caching test, where the results below demonstrate the RPi 2's faster RAM speed.


 #####################################################

   DriveSpeed RasPi 1.1 Mon Dec 16 16:20:35 2013
 
 Current Directory Path: /home/pi/benchmarks/DriveSpeed
 Total MB   14894, Free MB   12338, Used MB    2556

                        MBytes/Second
  MB   Write1   Write2   Write3    Read1    Read2    Read3

   8     8.33     8.82     6.87    22.46    22.74    22.74
  16    14.45    14.07    19.45    22.66    22.78    22.76
 Cached
   8    45.95    49.94    58.35   156.96   156.18   155.54

 Random         Read                       Write
 From MB        4        8       16        4        8       16
 msecs      0.711    0.709    0.757     3.34     2.97     6.67

 200 Files      Write                      Read                  Delete
 File KB        4        8       16        4        8       16     secs
 MB/sec      1.49     2.54     3.72     5.35     8.65    11.91
 ms/file     2.75     3.23     4.41     0.77     0.95     1.38    0.086


                End of test Mon Dec 16 16:21:06 2013

 #####################################################
 
                    Raspberry Pi 2

   DriveSpeed RasPi 1.1 Sun Mar  1 10:43:41 2015
 
 Current Directory Path: /home/pi/benchmarks/drivespd
 Total MB    6266, Free MB    3444, Used MB    2822

                        MBytes/Second
  MB   Write1   Write2   Write3    Read1    Read2    Read3

 Cached
   8   101.13   118.31   143.66   487.11   495.06   481.95
   

As usual, the tests are run from an LX Terminal command (./DriveSpeed), pointing to the directory containing the benchmark. There are also run time parameters for starting file size (example for 16 and 32 MB) and the path for the data. The latter is particularly important for measuring speeds via USB connections. The format for the run command, using a different path for data and a different file size, is:

./DriveSpeed MBytes nn, FilePath /dddd/dddd (or M nn, F /dddd/dddd).

Mounted USB devices can be identified by executing a df command, with my results shown below (the /dev entries) for a USB Flash drive and a USB powered disk drive with two partitions, the first a FAT formatted area and the second (the long number) a Linux bootable Ext4 section. The benchmark can be executed, as shown below, using the displayed path. Sometimes, a sudo command might be needed. The benchmark can also be saved on the USB drive and run from there.

My system, at least, appears to crash sometimes on changing the USB drive, even after executing an unmount command (see below). A better option appeared to be via the Places tab in File Manager.


 Step 1 display paths

 pi@raspberrypi ~ $ df

 Filesystem      Size  Used Avail Use% Mounted on
 rootfs           15G  2.0G   12G  14% /
 /dev/root        15G  2.0G   12G  14% /
 devtmpfs        180M     0  180M   0% /dev
 tmpfs            38M  300K   38M   1% /run
 tmpfs           5.0M     0  5.0M   0% /run/lock
 tmpfs            75M     0   75M   0% /run/shm
 /dev/mmcblk0p1   56M   19M   38M  34% /boot
 /dev/sdc1       1.9G  222M  1.7G  12% /media/USB2         (USB2 = volume name)
 /dev/sda1        56M   19M   38M  34% /media/C522-EA52
 /dev/sda2       7.3G  1.7G  5.4G  24% /media/62ba9ec9-47d9-4421-aaee-71dd6c0f3707


 Execute Examples

 pi@raspberrypi ~/testdir $ ./DriveSpeed FilePath /media/USB2 
 pi@raspberrypi ~/testdir $ sudo ./DriveSpeed FilePath /media/that-long-path 


 Benchmark On USB Drive Examples

 pi@raspberrypi ~ $ /media/path/DriveSpeed 
 pi@raspberrypi /media/path $ ./DriveSpeed


 Possibility Permissions need setting

 pi@raspberrypi /media/bmarkhere $ sudo chmod 0777 DriveSpeed


 Unmount

 pi@raspberrypi ~/testdir $ sudo umount /media/path

   


To Start


DriveSpeed Comparison

Following are example results, but note that there can be considerable variations between test runs. The first two of the SD cards have a Class 4 specification, where the number represents minimum speed for recording a video, in MBytes/second. SD 3 has a Class 10 rating but can be the slowest. SD 4 is a SanDisk Extreme Pro microSDHC UHS-1 Class 10 card, rated at up to 633X or 95 MB/second. This clearly has the fastest card writing speeds but, as for most reading speeds, is limited by bus clock frequency. CPU utilisation was less than 10% during writing and reading the large files. All SD cards are Ext4 formatted as system drives. RPi booting times are also shown, where SD 3 is again the slowest, possibly related to its random reading times.

Next is a series of USB flash memory sticks with FAT formatting. St1 is a SanDisk Cruzer with maximum writing speed rated at 10 MB/s and an 8 KB sector size. For reference, St2 is an old drive. Patriot Rage XT St3 write/read ratings are 25/27 MB/s with 4 KB sectors. St5 is a high speed SanDisk Extreme USB 3.0 drive, with write/read ratings of 110/190 MB/s and 16 KB sectors. St4 and St6 are the Patriot and SanDisk Extreme drives with Ext4 format. The main observations are that the faster drive provides little advantage on reading performance, limited by bus speed and other overheads, but produces the fastest writing speeds, with significant gains using Ext4 format over FAT.

The disk drive (USB2 HD) results probably reflect bus and RPi overheads, with similar performance to the fastest USB stick on large file tests, including gains from the improved formatting. This is not the case on random access and small files, particularly on writing and more so using FAT format.

The last results are from a Linux based PC with a 2.4 GHz CPU and USB2 sockets. Source code, identical to that used for the RPi, was compiled for the tests. The first is for a SATA based disk drive, with its superior performance, particularly on large files. Then there are results for USB sticks St1, St5 and St6, indicating faster hardware speeds and lower overheads, particularly on writing and reading large files. Here, FAT formatting led to the worst performance when writing small files.

Raspberry Pi 2 speeds are provided for the main SD card and USB sticks St5 FAT and St6 Ext4, and also for the fast micro SD 4 card via two different readers. The latter produced faster USB 2 speeds on large files using a USB 3 card reader. With other devices, performance could be somewhat better or worse than that via the original Raspberry Pi.

 
   MB/second 16 MB files                                                  Boot
 Large               Write1   Write2   Write3    Read1    Read2    Read3 Seconds
 
 SD Main  16 GB        11.5     10.3     11.5     22.7     22.7     22.8    36
 SD 2      4 GB         8.0      9.4      8.2     20.2     20.2     20.2    37
 SD 3      8 GB         3.8      6.7      4.6     18.3     18.4     18.2    59
 SD 4     16 GB        19.6     19.8     19.9     22.6     22.2     22.8    37

 USB2 St1 16 GB FAT     3.8      4.0      3.8     24.3     24.7     24.1
 USB2 St2  2 GB old     3.9      3.9      3.9     14.4     14.6     14.6
 USB2 St3  8 GB FAT     9.1      9.2      9.3     25.6     25.5     24.6
 USB2 St4  8 GB Ext4   11.8     11.7     10.8     25.6     25.3     25.3
 USB3 St5 32 GB FAT    17.2     17.3     16.2     25.9     26.1     26.0 [2]
 USB3 St6 32 GB Ext4   26.1     26.4     26.4     26.5     26.2     26.2 [2]
 USB3 St7 32 GB F2fs   22.0     22.0     22.3     24.9     25.4     25.8 [2]

 USB2 HD FAT           17.0     16.0     16.0     24.0     25.6     25.7
 USB2 HD Ext4          24.8     23.8     24.7     22.5     23.4     21.1

 Raspberry Pi 2
                                                                          Boot
                                                                         Seconds
 SD Main  8 GB         12.6     12.5     12.6     19.5     19.2     19.5    33
 SD 4A    16 GB        29.8     28.6     29.6     30.1     29.4     28.9
 SD 4B    16 GB        15.6     15.7     15.7     19.2     19.3     19.4

 USB3 St5 32 GB FAT     9.2      7.8     11.6     29.1     29.0     29.3
 USB3 St6 32 GB Ext4   19.0     19.2     26.3     24.1     30.3     30.3

 Linux

 Main HD PC Ext4       68.4     52.0     77.0     77.7     69.7     70.0
 USB2 St1 Linux FAT     4.3      3.4      4.0     26.7     26.1     25.8
 USB3 St5 Linux FAT    28.1     28.5     27.5     39.0     39.3     39.3 [1]
 USB3 St6 Linux Ext4   29.5     29.6     29.6     39.1     39.3     39.3 [1]
 USB3 St7 Linux F2fs   29.6     30.0     29.6     39.6     39.7     39.4 [1]

 
   Random milliseconds
                        Read                       Write
 From MB                  4        8       16        4        8       16
 
 SD Main Kingston     0.568    0.538    0.535      4.6      5.0      5.2
 SD 2    PNY          0.821    0.775    0.997     11.1     26.6     28.9
 SD 3    Verbatim     0.995    1.076    1.144      8.5    113.8     70.3
 SD 4   SanDisk EP    0.748    0.735    0.696      2.6      4.4      2.4

 USB2 St1 San Cruzer  0.806    0.799    0.791     20.1     22.2     62.9
 USB2 St2 Old         0.906    0.888    0.889     42.2     56.7    291.7
 USB2 St3 Patriot Rge 0.817    0.789    0.937      3.7     10.1     30.0
 USB2 St4 Pat Ext4    0.775    0.776    0.801      6.0      3.6     10.3
 USB3 St5 San Extreme 0.894    0.891    0.871      1.4      1.2      0.8 [3]
 USB3 St6 San Ex Ext4 0.839    0.822    0.845      0.9      0.8      0.8 [3]
 USB3 St7 San Ex F2fs 0.851    0.903    0.896      2.1      3.2      2.3 [3]

 USB3 St6 4 KB Ext4   0.928    0.940    0.950      1.0      1.0      1.0 [4]
 USB3 St7 4 KB F2fs   0.926    0.943    0.946      0.9      0.9      0.9 [4]

 USB3 St6 Cached Ext4 0.024    0.034    0.114     0.03     0.03     0.04 [5]
 USB3 St7 Cached F2fs 0.025    0.021    0.191     0.01     0.01     0.01 [5]

 1 GB file 4 KB from 256, 512, 1024 MB
 USB3 St6 Cached Ext4 1.168    1.137    1.117     0.32     1.07     0.56  [6]
 USB3 St7 Cached F2fs 1.212    1.160    1.149     0.13     0.12     0.14  [6]

 USB2 HD FAT          0.904    1.490    3.879      1.7      2.1      2.3
 USB2 HD Ext4         0.892    1.750    4.250      1.6      2.2      2.4

 Raspberry Pi 2

 SD Main   8 GB       0.389    0.571    0.403      3.5      8.3      3.4
 SD 4A    16 GB       0.656    0.708    0.698      2.2      3.2      2.7
 SD 4B    16 GB       0.807    0.856    0.843      2.8      4.5      2.1

 USB3 St5 32 GB FAT   0.979    0.484    0.481     1.40     1.60     0.61
 USB3 St6 32 GB Ext4  0.415    0.416    0.439     0.69     0.75     0.59

 Linux

 Main HD PC Ext4      0.501    0.385    4.163      1.5      2.5      3.3
 USB2 St1 Linux FAT   0.501    0.498    0.499     91.9     41.5     80.1
 USB3 St5 Linux FAT   0.505    0.501    0.500      0.8      1.0      1.5 [3]
 USB3 St6 Linux Ext4  0.503    0.498    0.499      1.1      0.8      0.6 [3]
 USB3 St7 Linux F2fs  0.602    0.624    0.624      1.8      1.7      1.8 [3]   

 
   Milliseconds per file
                        Write                      Read                   Delete
 File KB                  4        8       16        4        8       16  Seconds
 
 SD Main               5.30     4.15     4.49     0.87     0.94     1.39   0.108
 SD 2                  4.99     6.25     6.39     1.16     1.65     2.23   0.122
 SD 3                  5.83    17.44     8.40     1.37     1.99     2.64   0.105
 SD 4                  2.68     2.58     3.79     0.82     0.94     1.33   0.094

 USB2 St1 FAT         30.75    18.41    25.15     1.09     1.17     1.55   0.100
 USB2 St2 FAT         53.83    40.84    35.26     1.75     1.63     1.98   0.058
 USB2 St3 Pat FAT     15.01    15.71    19.27     1.25     1.46     1.57   0.096
 USB2 St4 Pat Ext4     4.48     4.73     8.72     1.14     1.30     1.61   0.043
 USB3 St5 San Ex FAT   4.95     4.48     4.94     1.01     1.30     1.51   0.445 [7]
 USB3 St6 San Ex Ext4  1.57     1.43     2.02     0.98     1.06     1.32   0.043 [7]
 USB3 St7 San Ex F2fs  1.56     1.51     1.87     0.92     1.05     1.36   0.032 [7]
 

 USB2 HD FAT           8.87     8.20     8.49     1.46     1.37     1.97   0.409
 USB2 HD Ext4          2.86     1.88     2.23     4.43     1.50     1.57   0.109

 Raspberry Pi 2

 SD Main               2.79     2.27     2.72     0.57     0.84     1.25   0.036
 SD 4A                 1.49     2.22     1.20     0.64     0.91     1.14   0.037
 SD 4B                 0.96     1.21     1.74     0.60     0.86     1.29   0.037

 USB3 St5 32 GB FAT    2.39     1.77     9.88     0.42     0.67     3.60   0.043
 USB3 St6 32 GB Ext4   1.02     0.84     2.37     0.71     0.57     0.76   0.025

 Linux

 Main HD PC            1.25     0.24     0.35     0.30     0.29     0.37   0.004
 USB2 St1 Linux FAT   40.85    27.53    37.09     0.60     0.64     0.89   0.004
 USB3 St5 Linux FAT 1 10.49    10.70    10.86     0.53     0.67     0.73   0.004 [7]
 USB3 St5 Linux FAT 2  1.22     1.07     0.96     0.69     0.73     0.76   0.003 [7]
 USB3 St6 Linux Ext4   0.72     0.65     0.90     0.38     0.52     0.76   0.004 [7]
 USB3 St7 Linux F2fs   0.51     0.59     0.51     0.39     0.51     0.40   0.003 [7]

             FAT 1 and FAT 2 Typical variations on this device using FAT
             SD 4A Old USB 2 Hub, SD 4B USB 3 card reader

   




DriveSpeed F2FS Format

F2FS, the Flash Friendly File System, was created by Samsung to work with Linux, specifically to suit the characteristics of flash based devices such as SSDs and SD cards. Published benchmark results often show that writing performance is superior to that using Ext4 format, particularly with random access. Others indicate faster speeds handling small files.

In order to format a USB flash drive, a recent version of Linux is required. In my case, Ubuntu 13.10 with Linux 3.12.0 was installed, followed by f2fs-tools. I formatted my SanDisk Extreme USB 3.0 drive, using GParted, with three partitions: FAT, Ext4 and F2fs. The F2fs partition was shown as having an unknown format and did not appear in df command output. However, it could still be mounted manually. Even then it was not visible in Ubuntu, but the directory path could be accessed by the benchmark (using sudo).

For the Raspberry Pi, I downloaded and installed 2013.12.20 Raspbian with Linux 3.10. This provides support at least for reading and writing F2fs partitions. Initially, the existing F2fs USB drive partition was not visible using the df command but, as the drive had another partition, the Filesystem path could be assumed and mounted. The DriveSpeed benchmark was run on the Linux PC and RPi, results being included above under St5, St6 and St7 - see the bracketed [1] to [7] references.

Large Files - The three different formats produced the same high speed writing and reading on the Linux PC [1] but with some degradation on writing on the RPi to F2fs and particularly FAT [2].

Random Access - Random reading was slightly slower using F2fs, which was also noticeably the slowest on writing. Again the Linux PC was faster [3]. Random access for the benchmark is via 1 KB block sizes. Using VMSTAT with F2fs, it was found that 4 KB was being read and written for each 1 KB access. Increasing the block size to 4 KB avoided the extra reading, and F2fs was then slightly faster than Ext4 [4].

Random Access Cached - The benchmark opens the file for random access using Direct I/O, avoiding data being kept in the RAM based cache. Enabling caching produces ridiculously fast response times, with the file sizes used [5] (at 1KB block size).

Random Access Larger Files - The next step was to see what happens with larger files, where up to 1 GB was used [6] (with 4 KB blocks). In this case, random writing times varied considerably with Ext4 (more than shown) but were consistently much faster with F2fs formatting, apparently due to the way in which data is stored. The benchmark is supposed to measure speeds over four seconds but, with Ext4, the actual time could be much longer, probably due to reorganising the data after writing was committed.

Small Files - [7] Average writing and reading times of small files could vary quite a bit but, using Ext4 and F2fs, were generally faster than via FAT formatting and F2fs marginally the winner.

Random Accesses Longer Time - Below [8] are further cached results with 4 KB from 256, 512 and 1024 MB, but running for 40 seconds (Ext4 up to 45 seconds), from which the number of transactions executed has been calculated. Other statistics shown were derived from running VMSTAT at the same time.

Ext4 and F2fs response times and system loading are similar on reading. The speed is now much faster reading from 256 MB, with higher CPU utilisation, due to more data being in the RAM based cache. KB per transaction numbers represent data read over the USB and this can be larger than the 4 KB data requests.

Writing response times are a little slower than with 4 second tests but more consistent with Ext4. The most important observation is that F2fs is still remarkably fast, transferring data over USB at near maximum speed and with high CPU utilisation organising the data.

 
                         4 KB Random Access Over 40 Seconds [8]

                              Read                       Write
 From MB                       256      512     1024      256      512     1024
     
 USB3 St6 Cached Ext4 msecs  0.099    0.617    0.967     0.80     1.35     1.26
       Transactions x 1000     404       65       41       53       33       33
       Million Bytes           263      314      324      151      139      115
       KB per transaction      0.7      4.8      7.8      2.9      4.2      3.4
       MB per second           6.6      7.8      8.1      3.6      3.1      2.7
       CPU Utilisation         64%      49%      49%      33%      27%      30%
        
 USB3 St7 Cached F2fs msecs  0.107    0.636    0.997     0.14     0.17     0.18
       Transactions x 1000     374       63       40      286      235      222
       Million Bytes           262      310      318      945      885      833
       KB per transaction      0.7      4.9      7.9      3.3      3.8      3.7
       MB per second           6.6      7.7      7.9     23.6     22.1     20.8
       CPU Utilisation         62%      50%      49%      95%      91%      92%

   




Copying F2FS Files

Performance investigation of USB drives formatted with F2fs, compared with Ext4, was prompted by reports in the XBMC Community Forum that copying files to the former was up to nine times faster than to the same drive formatted as Ext4. The particular page is no longer directly available but might still be found by Googling for “OpenELEC Testbuilds for RaspberryPi Part 2” 2013-12-19 20:03 (was page 199, later 133). The DriveSpeed benchmark did not demonstrate this level of performance gain, except during an extended period of random writing. Note that copying files is likely to involve normal reading and writing, transferring data via a RAM based file cache.

DriveSpeed measures speed with caching enabled, but for larger files. A modified caching version was produced using a large number of small files of increasing sizes where, unlike copying, writing precedes reading. Average results of three tests are shown below [9]. F2fs is faster using smaller files, but not by much, with the position reversed as file sizes are increased. Data transfer speed in MBytes per second is provided [10] to demonstrate caching, where USB speed is exceeded (at more than 30 MB/second). Data was no longer cached from the point where totals reached 256 MB, half the RAM size.

The next stage involved producing a series of directories, with average file sizes between 6 KB and 500 KB, occupying over 100 MB (similar to sizes quoted in the XBMC Forum). Results below [11] show that, still using the SanDisk Extreme USB 3.0 drive, F2fs is a little faster at the larger file sizes, but the position is reversed at reducing file sizes. Most significant is at 6 KB, where Ext4 is 70% faster, with the du command reporting 178 MB, compared with 269 MB for F2fs. VMSTAT recorded MegaBytes written and read, memory used and cache space are also shown for this test, confirming at least these volumes. Windows identified total file size and disk space used under NTFS are also shown. For comparison purposes, calculated MB/second speeds are based on the former.

I installed XBMC Media Center on a Windows based PC to produce a Thumbnails directory from photographs, included in the mix in case there was something special about them. The directory comprised 4370 JPG files at around 34 KB average size, occupying 161 MB with Ext4 and 178 MB under F2fs, the former being slightly faster. These directories were also copied, using two other USB sticks, via the Raspberry Pi and a Linux based PC (plus limited tests with FAT formatting). Linux was faster on all, and the other drives were slower than the Extreme, but there were no significant variations between Ext4 and F2fs formatting. Results are again shown below.

XBMC for the Raspberry Pi is part of OpenElec (Open Embedded Linux Entertainment Center). I installed various versions of this on SD cards and ran DriveSpeed benchmark and file copying tests, booted to OpenElec. Details are in Raspberry Pi OpenElec Benchmarks.htm.

 
 [9] DriveSpeed 1000 small files, cached, average milliseconds per file, Extreme Drive

 File KB         4       8      16      32      64     128     256     512    1024
 F2FS
 Write        0.35    0.32    0.45    0.63    1.50    3.40    9.67   20.49   40.97
 Read         0.09    0.12    0.18    0.28    0.69    1.72   13.00   23.29   43.94
 
 Ext4
 Write        0.46    0.48    0.60    1.07    2.33    5.28   10.21   20.29   43.40
 Read         0.12    0.16    0.21    0.33    0.63    1.43   11.80   21.33   43.75
 
 [10] F2FS MB/second
 Write        11.5    25.0    35.8    51.1    42.8    37.6    26.5    25.0    25.0
 Read         44.4    68.6    88.9   112.9    92.8    74.6    19.7    22.0    23.3

 
 Copying command and results format
 
 time sh -c "cp -r /source  /destination && sync" 
 real	0m35.851s
 user	0m0.420s
 sys	0m7.420s

 
 [11] Copying Six Different Directories Extreme Drive

                                                                     Based on Win MB
              Win KB     Win  Win on    F2FS    Ext4    F2FS    Ext4    F2FS    Ext4
 Set   Files   /file      MB   Drive   du MB   du MB    Secs    Secs  MB/sec  MB/sec
 
   1   22945       6     129     173     269 xxx 178   106.9    63.0     1.2     2.0
   2   12974      11     140     171     227     176    66.5    57.0     2.1     2.5
   3    7118      23     161     179     212     184    47.2    39.1     3.4     4.1
  4T    4370      34     148     156     178     161    35.9    30.0     4.1     4.9
   5     932     107     100     102     109     105    14.9    18.0     6.7     5.6
   6     959     492     472     474     466     462    46.2    51.6    10.2     9.1

         xxx vmstat MB F2FS Read 272 Write 277, Ext4 Read 184 Write 223
         xxx vmstat MB F2FS RAM  298 Cache 288, Ext4 RAM  286 Cache 248


    XBMC Thumbnails 4T - 4370 Files 148 MB

               F2FS                    Ext4                     FAT
    Drive      Elap     CPU  MB/sec    Elap     CPU  MB/sec    Elap     CPU  MB/sec
               Secs    Secs            Secs    Secs            Secs    Secs

    Rpi
    Extreme     35.9     7.8     4.1    30.0     8.2     4.9    64.9    19.9     2.3
    Attache     75.3     7.8     2.0    75.4     9.2     2.0
    Cruzer     118.9     7.9     1.2   103.7     9.2     1.4

    Linux
    Extreme     26.9     0.8     5.5    26.8     0.8     5.5    55.7     1.8     2.7
    Attache     54.7     0.7     2.7    65.0     0.7     2.3
    Cruzer      98.6     0.7     1.5    86.4     0.8     1.7

   




LAN/WiFi Benchmark - LanSpeed

This is mainly the same as the DriveSpeed benchmark, described above. The exception is that the cached data test is not possible and the open file options to avoid caching produce run time errors. The benchmark and source code are again in Raspberry_Pi_Benchmarks.zip. Tests carried out are:

Test 1 - Write and read three files at 8 and 16 MB; results given in MBytes/second
Test 2 - Random write and read 1 KB from 4 to 16 MB; Results are Average time in milliseconds
Test 3 - Write and read 200 files 4 KB to 16 KB; Results in MB/sec, msecs/file and delete seconds.

The benchmark can measure performance communicating with both Windows and Linux via a Local Area Network (LAN), including a wireless connection, in my case via a Windows Workgroup. The first step is to set up a directory on the Raspberry Pi as the mount point for the remote sharable data, in my case /public in /media. Then, a directory on the remote system is useful, in my case /test.

The second step is to obtain the Internet Protocol (IP) address of remote PCs - in my case this is dynamic, variable not constant. The appropriate commands are shown below, followed by those for the third step to mount the sharable drive, partition or directory.

The benchmark can be run in three ways with LAN involvement, firstly with the Terminal pointing to the directory on the RPi containing the benchmark and a FilePath parameter /media/public/test (in my case). The second method requires a copy of LanSpeed in /media/public/test with Terminal pointing to that source. The final method uses the remote copy but just loads the benchmark and uses the home (or whichever) folder for writing and reading files, with no LAN activity. As with DriveSpeed, a run time parameter can also specify minimum size for the large file tests (example ./LanSpeed MB 32 for 32 and 64 MB).


 Create new folder command - sudo mkdir /media/public

 NOTE: there should be no spaces after commas with multiple -o options

 Windows Command Prompt ipconfig command = 192.168.0.2
 Windows share drive (partition) d 
 sudo mount -t cifs -o dir_mode=0777,file_mode=0777 //192.168.0.2/d /media/public
 can also add -o password=pi - in this case unchanged default password

 Linux Terminal command ifconfig eth0 (or eth1) = 192.168.0.3
 Linux Wireless Connection Information          = 192.168.0.4
 Linux share directory all
 sudo mount -t cifs -o user=UU,password=PP //192.168.0.3/all /media/public
  UU and PP are IDs for Linux system, -o dir_mode=0777,file_mode=0777 not needed
 NOTE: If wrong IDs are used, a locked file will be generated and this leads to a
 failure to open a new file when correct IDs are used. The file must be deleted.

 Benchmark and log on Raspberry Pi
 pi@raspberrypi ~/benchmarks/lanspeed $ ./LanSpeed FilePath /media/public/test

 Benchmark and log on remote system
 pi@raspberrypi /media/public/test $ ./LanSpeed

 Benchmark remote, data and log /home/pi - does not use LAN
 pi@raspberrypi ~ $ /media/public/test/LanSpeed

 sudo umount //192.168.0.2/d or //192.168.0.3/all

  




LAN/WiFi Benchmark - More

The Raspberry Pi LAN speed is rated at 100 Mbps, so the maximum data transfer speed will be less than 12.5 MB/second, due to overheads. See the example results below. The overheads also lead to the fairly constant average times to write and read small files. See Raspberry Pi 2 results in the comparisons below.


 #####################################################

   LanSpeed RasPi 1.0 Tue Jul  2 10:56:28 2013
 
 Current Directory Path: /media/public/test
 Total MB  230000, Free MB   85052, Used MB  144948

                        MBytes/Second
  MB   Write1   Write2   Write3    Read1    Read2    Read3

   8     7.49     5.84     8.13    11.56     9.04    11.57
  16     7.29     8.13     6.78    11.53    11.60    11.58

 Random         Read                       Write
 From MB        4        8       16        4        8       16
 msecs      0.011    2.272    1.651     3.40     4.12     4.17

 200 Files      Write                      Read                  Delete
 File KB        4        8       16        4        8       16     secs
 MB/sec      0.62     1.20     2.13     1.05     1.62     2.74
 ms/file     6.63     6.83     7.68     3.88     5.07     5.98    0.280

                End of test Tue Jul  2 10:57:12 2013

   

Intel Linux and Windows Versions - LanSpdx86Lin, LanSpdx86Win.exe

Versions to run on Intel processors via Linux and Windows have been produced. The former was compiled from the supplied lanspeed.c code, but with a different version string for printing. The Windows version has some slight changes, inherited from an earlier benchmark. The execution files are included in the zip file. They can be run from the host PC or stored on the RPi drive and executed via the LAN.

In order for Windows Workgroup systems to access RPi files, samba and samba-common-bin need to be installed, along with changes to /etc/samba/smb.conf. Detailed procedures are in Treating Raspberry Pi as just another Windows machine.

The remote Pi can be made visible using Windows “Map network drive” (T: on my PC). The Raspberry Pi user name and password need to be entered (I seem to have changed the password from raspberry to pi, so mine is pi and pi). The benchmark can then be run from a Windows Command prompt in two ways, as shown below, where LanSpdx86Win.exe is in folder D:\WinDDK\32bit\lanspeed. The .exe file can also be copied to a folder on the Pi, the folder selected in Windows and the benchmark run by double clicking on the .exe file. The log will be saved in the same folder.

The rather convoluted mount command, shown below, is needed to run from Linux. The benchmark (in my case from roy@roy-64Bit:~/all/lanspeed$) can be run from a Terminal command, also shown below. The program can be saved on the RPi (in /media/public/lanspeed). I also copied a script file, runlan86, with the command "./LanSpdx86Lin" and execution permission set. This can be run by clicking on the script file, where output is on the Linux Terminal display.


  Windows
  D:\WinDDK\32bit\lanspeed>LanSpdx86Win FilePath T:\test
  D:\WinDDK\32bit\lanspeed>LanSpdx86Win FilePath \\MYPI\pi\test

  Raspberry Pi
  ifconfig eth0 = 192.168.0.8

  Linux Ubuntu 10.10 using smbfs - mount all on one line
  sudo mount -t smbfs -o user=pi,password=pi,dir_mode=0777,file_mode=0777 
  //192.168.0.8/public/home/pi/benchmarks /media/public 

  Linux run command - ./LanSpdx86Lin FilePath /media/public/lanspeed

   




LAN/WiFi Comparison

The results log files identify the system running the tests in Configuration Details with somewhat different variations using Windows and Linux. The destination system can normally be identified from logged Current Directory Path and Total MB (drive capacity).

The first four results are for the RPi handling data to/from Windows and Linux, then as destination from/to the two PCs. It should be noted that there can be significant performance differences depending on which system is the source or destination.

The next two sets of results are from RPi to a laptop via WiFi, showing the reduction in speed when the laptop is some distance from the router. These are followed by a test not using the LAN, but with RPi accessing the local drive, as DriveSpeed above, where data is cached in RAM.

The last four results are using a Gigabit LAN, again with wide variations in performance depending on the configuration used.

Some LanSpeed Raspberry Pi 2 results are included. Running this, accessing a Windows PC, appeared to produce more consistently high reading and writing speeds for the large files, at over 11 MB/second (demonstrating the 100 Mbps LAN), compared with the original RPi. Running LanSpdx86Win.exe, stored on the SD drive, demonstrated some improvement.


 Source  Dest                             MBytes/Second
 CPU     CPU/drive     MB  Write1  Write2  Write3   Read1   Read2   Read3

 Rpi     Ph Win        16    7.29    8.13    6.78   11.53   11.60   11.58
 Ph Win  Rpi           16   11.29   11.18   10.70    4.22    2.70    1.97
 Rpi 2   Ph Win        16   11.31   11.32   11.32   11.65   10.80   11.65
 Ph Win  RPi 2         16   11.51   11.53   11.49    5.33    3.47    2.57
 Rpi     C2 Lin        16    7.79    7.52    7.84   11.62   11.61   11.66
 C2 Lin  Rpi           16    6.53    6.36    6.23    5.58    5.49    6.01
 Rpi     LT Lin        16    3.23    3.24    3.20    3.59    3.50    3.50 WiFi
 Rpi     LT Lin        16    1.78    1.62    1.00    0.92    0.89    0.39 WiFi outside
 Rpi     Rpi           16   57.41   60.05   50.00  155.48  152.66  155.89 cached
 C2 Lin  Ph Win        16   57.76   54.31   55.02   33.82   31.91   32.13 1Gbps
 Ph Win  C2 Lin        16  108.62   89.62  109.83   36.45   22.09   15.30 write later
 Ph Win  C2 Win        16   29.19   38.20   38.18   21.48   14.95   11.59 1Gbps
 C2 Win  Ph Win        16   72.36   68.46   50.16   25.96   18.76   12.71 1Gbps


 Random msecs        Read                   Write
 From MB                4       8      16       4       8      16

 Rpi     Ph Win     0.011   2.272   1.651    3.40    4.12    4.17
 Ph Win  Rpi        1.299   1.208   1.275    1.29    1.37    1.28
 Rpi 2   Ph Win     0.124   0.911   0.998    1.96    1.56    1.68
 Ph Win  RPi 2      0.722   0.699   0.688    0.73    0.73    0.73
 Rpi     C2 Lin     0.637   2.160   0.872    2.42    2.14    2.15
 C2 Lin  Rpi        1.820   0.978   1.259    3.05    2.49    2.45
 Rpi     LT Lin     4.520   5.391   3.234    4.08    3.22    3.16         WiFi
 Rpi     LT Lin    10.264  11.906  11.107    5.16    4.08    4.29         WiFi outside
 Rpi     Rpi        0.012   0.012   0.012   23.03   24.69   25.01         cached
 C2 Lin  Ph Win     0.001   0.002   0.002    1.79    2.04    1.77         1Gbps
 Ph Win  C2 Lin     0.556   0.468   0.423    0.43    0.43    0.43         write later
 Ph Win  C2 Win     0.846   0.875   5.553    1.13    2.41    2.88         1Gbps
 C2 Win  Ph Win     0.613   0.585   0.583    0.88    1.24    1.37         1Gbps

 
                               milliseconds per file             
 200 Files          Write                    Read                  Delete
 File KB                4       8      16       4       8      16    secs

 Rpi     Ph Win      6.63    6.83    7.68    3.88    5.07    5.98    0.28
 Ph Win  Rpi        14.15   14.21   15.76   10.32   10.52   11.47    1.79
 Rpi 2   Ph Win      3.92    4.31    5.08    2.33    2.65    3.48    0.15
 Ph Win  RPi 2       7.78    8.33    9.54    4.84    5.31    5.96    0.74
 Rpi     C2 Lin      5.74    6.83    8.96    4.87    5.97    6.74    0.60
 C2 Lin  Rpi         9.87   10.55   11.73    7.13    7.52    8.44    1.30
 Rpi     LT Lin      9.79   10.81   13.34    7.69    8.95   11.53    1.07 WiFi
 Rpi     LT Lin     12.26   16.08   18.94    9.30   12.53   15.09    1.54 WiFi outside
 Rpi     Rpi         0.87    0.73    0.67    0.08    0.15    0.19    0.05 cached
 C2 Lin  Ph Win      2.57    2.52    2.61    0.85    0.83    0.86    0.13 1Gbps
 Ph Win  C2 Lin      3.72    3.58    3.60    3.20    3.22    3.31    0.52 write later
 Ph Win  C2 Win      4.92    3.46    3.50    3.22    3.09    3.42    0.40 1Gbps
 C2 Win  Ph Win      3.10    3.12    3.19    3.99    2.93    2.73    0.46 1Gbps

 Ph Win = Phenom Windows 7    C2 Win = Core 2 Vista   C2 Lin = Core 2 Ubuntu 10.1
 LT Lin = Laptop Ubuntu 10.1  RPi    = Raspberry Pi

   




Single Core NEON Benchmarks

Some of these are essentially the same as my Android NEON Benchmarks.htm, using NEON Intrinsic Functions. Others are produced by including the compile option -funsafe-math-optimizations, alongside -mfpu=neon-vfpv4. Results for single core NEON benchmarks are included in this document, with the programs and source codes in Raspberry_Pi_Benchmarks.zip. For MultiThreading versions, see Raspberry Pi Multithreading Benchmarks.htm and Raspberry_Pi_MP_Benchmarks.zip.

Linpack NEON Benchmarks - linpackPiNEONi and linpackPiFSSP

The Android version was written using NEON Intrinsic Functions and was converted to Linux format in linpackneon.c, compiled as LinpackPiNEONi. The standard Linux single precision version was recompiled with the additional -funsafe-math-optimizations parameter as linpackPiFSSP. Comparative performance of the intrinsic program is shown above.

Linpack benchmark performance is mainly determined by the daxpy function, specifically an unrolled loop with four dy[i] = dy[i] + da * dx[i] statements, accessing sequential data. NEON q registers are 128 bits or four words, and there are multiply and add instructions using three registers. The assembly code loop has two loads and one store, with linpackPiNEONi using the vmla Vector Multiply Accumulate instruction and linpackPiFSSP using the faster vfma Fused Multiply Accumulate - one instruction for four multiplies and four adds.

These instructions are known to produce rounding complications, differences in results being shown below. I could not say whether they are acceptable.

                        linpackPiNEONi        linpackPiFSSP
                       
  MFLOPS at  900 MHz         300                   311 
  MFLOPS at 1000 MHz         334                   348

  NEON Function      vmla.f32 q8, q9, q10   vfma.f32 q8, q9, q10

                norm resid    resid           x[0]-1           x[n-1]-1

 Pi, Android+NEON   1.6   3.80277634e-05  -1.38282776e-05  -7.51018524e-06
 Pi 2 Not NEON      2.0   4.69621336E-05  -1.31130219E-05  -1.30534172E-05
 Pi 2 Intrinsic     2.2   5.16722466e-05  -2.38418579e-07  -5.06639481e-06  
 Pi 2 Compiled      1.9   4.62468779e-05  -1.31130219e-05  -1.30534172e-05
   


To Start


NEON Float & Integer Benchmark - NeonSpeed

This was the first benchmark produced to measure speed using NEON instructions on ARM v7 CPUs using Android. It executes some of the code used in Memory Speed Benchmark, with additional tests recoded using NEON intrinsic functions. The benchmark and source code are included in Raspberry_Pi_Benchmarks.zip.

The compile command (for gcc 4.8) is shown below, where the -funsafe-math-optimizations option leads to the compiler generating NEON code for normal floating point statements. In this case, vfma Fused Multiply Accumulate instructions were generated, as opposed to vmla Vector Multiply Accumulate from the intrinsic functions, and vadd.i32 was produced for all integer tests. Performance from both methods was quite similar.

An example Android results log is also provided, to show the difference where the compiler does not generate NEON instructions.


  gcc  neonspeed.c cpuidc.c -lm -lrt -O3 -mcpu=cortex-a7 -mfloat-abi=hard 
      -mfpu=neon-vfpv4 -funsafe-math-optimizations -o NeonSpeed 

 ##############################################

 Raspberry Pi 2 CPU 900 MHz, Core 250 MHz, SDRAM 450 MHz

  NEON Speed Test V 1.0 Tue Mar 17 12:06:58 2015

       Vector Reading Speed in MBytes/Second
  Memory  Float v=v+s*v  Int v=v+v+s   Neon v=v+v
  KBytes   Norm   Neon   Norm   Neon  Float    Int

      16   1914   1978   2049   2293   2341   2797 L1
      32   1897   1951   2032   2253   2310   2745
      64   1517   1543   1619   1694   1718   1915 L2
     128   1417   1435   1510   1569   1594   1791
     256   1414   1433   1499   1571   1593   1771
     512    680    578    654    600    577    604
    1024    434    403    451    414    396    409 RAM
    4096    327    328    332    324    324    330
   16384    333    334    338    345    330    337
   65536    339    336    340    172    331    338

Max MFLOPS  479    495
Max MOPS                  512    573

##################### OC ######################

Raspberry Pi 2 CPU 1000 MHz, Core 500 MHz, SDRAM 500 MHz,
 over_voltage=2

  NEON Speed Test V 1.0 Tue Mar 17 12:12:37 2015

       Vector Reading Speed in MBytes/Second
  Memory  Float v=v+s*v  Int v=v+v+s   Neon v=v+v
  KBytes   Norm   Neon   Norm   Neon  Float    Int

      16   2114   2183   2265   2531   2587   3090 L1
      32   2078   2134   2228   2461   2532   3003
      64   1673   1703   1785   1870   1900   2118 L2
     128   1565   1581   1668   1736   1761   1974
     256   1545   1577   1660   1726   1752   1951
     512   1055   1042   1100   1121   1101   1178
    1024    499    506    523    525    512    530 RAM
    4096    429    431    440    428    433    445
   16384    436    438    448    453    440    454
   65536    446    443    452    229    444    458

Max MFLOPS  529    546
Max MOPS                  566    633

       End of test Tue Mar 17 12:12:57 2015

################### Android ####################

  Nexus 7 Quad 1200 MHz Cortex-A9, Android 4.1.2

   Android NeonSpeed Benchmark 15-Dec-2012 14.38

       Vector Reading Speed in MBytes/Second
  Memory  Float v=v+s*v  Int v=v+v+s   Neon v=v+v
  KBytes   Norm   Neon   Norm   Neon  Float    Int

      16    860   2575   2325   2918   3053   3245 L1
      32    950   2551   2400   2823   2944   3131
      64    744   1396   1329   1434   1465   1496 L2
     128    713   1342   1319   1365   1392   1417
     256    714   1339   1311   1357   1377   1400
     512    708   1323   1299   1348   1358   1383
    1024    608    875    869    917    930    952
    4096    460    493    492    481    488    504 RAM
   16384    460    498    487    507    506    504
   65536    459    495    469    251    503    505

Max MFLOPS  238    644
Max MOPS                  600    730
   


To Start


MemSpeed NEON - memSpdPiNEON

This is compiled from the Memory Speed Benchmark source code, using the -funsafe-math-optimizations additional compile parameter. An example of results is included above. The memspeedPiA7 benchmark, compiled with the -mfpu=neon-vfpv4 option, generated NEON instructions for integer arithmetic (vadd.i32 q8, q8, q10), as with memSpdPiNEON, leading to the same performance. Then four scalar add and multiply instructions (such as fadds s12, s8, s12) were generated for the single precision (SP) floating point test, as opposed to NEON (vfma.f32 q8, q9, q6) with the new benchmark, with similar differences for the second set of calculations. Details are above, and maximum MFLOPS below, showing a gain approaching 50% through using NEON instructions. Note: currently NEON floating point functions are only available at single precision. For reference, double precision (DP) results are also shown.

Both compilations for memspeedPiA7 and memSpdPiNEON have NEON integer instructions of the form vadd.i32 q8, q8, q9, providing significant performance gains, as shown by integer MOPS below.

                           memspeedPiA6   memspeedPiA7   memSpdPiNEON
                      
  SP MFLOPS at  900 MHz        333            299             445 
  SP MFLOPS at 1000 MHz        351            330             493
  DP MFLOPS at 1000 MHz        148            193             193

                           memspeedPiA6   memspeedPiA7   memSpdPiNEON

  Int MOPS  at 1000 MHz        333            566             562    
   


To Start


Maximum One Core Single Precision MFLOPS from notOpenMP-MFLOPS

This uses the same calculations as a number of maximum MFLOPS benchmarks, see MultiThreading Benchmarks. The program uses the same source code as OpenMP-MFLOPS, except the options to use OpenMP multithreading are not used. Arithmetic operations executed are of the form x[i] = (x[i] + a) * b - (x[i] + c) * d + (x[i] + e) * f with 2, 8 or 32 operations per input data word. Maximum speeds obtained are below. A variety of NEON vadd.f32, vmul.f32, vfma.f32 and vfms.f32 instructions are used in four way unrolled loops. Speed at 32 operations per word is reduced due to an excessive number of load instructions (not enough registers for 24 constants?).

                            2 Ops/word     8 Ops/word     32 Ops/word
 
  SP MFLOPS at  900 MHz        398            777             692
  SP MFLOPS at 1000 MHz        461            861             765    
   


To Start


MultiThreading Benchmarks

These are essentially the same as my Android Multithreading Benchmarks.htm, except the latter use Java to display results. The new ones use C printf to display results on a Terminal and fprintf to save a results log file on the Raspberry Pi drive. All run the benchmarks using 1, 2, 4 and 8 threads. Those that use caches and RAM have data sizes around 12.8 KB, 128 KB and 12.8 MB. Further details and results can be found in Raspberry Pi Multithreading Benchmarks.htm. Quad core Raspberry Pi 2 results are now included, showing performance gains of up to 25.6 times, compared with the original Raspberry Pi. The benchmarks and source codes are available in Raspberry_Pi_MP_Benchmarks.zip. The zip file also includes identical code compiled for Intel compatible processors running under Linux.

MP-MFLOPS - measures floating point speed on data from caches and RAM. The first calculations are as used in MemSpeed. Others use more calculations on each data word. Each thread carries out the same calculations but accesses different segments of the data. The result, on cache based calculations, is often performance proportional to the number of cores used.

MP-Whetstone - Multiple threads each run the eight test functions at the same time, but with some dedicated variables. Measured speed is based on the last thread to finish, with Mutex functions used to avoid update conflicts by allowing only one thread at a time to access common data. Again, performance is generally proportional to the number of cores used. There can be some significant differences from the single CPU Whetstone benchmark results on particular tests, due to a different compiler being used.

MP-Dhrystone - This runs multiple copies of the whole program at the same time. Dedicated data arrays are used for each thread but there are numerous other variables that are shared. The latter reduces performance gains via multiple threads and, in some cases, these can be slower than using a single thread.

MP-BusSpeed - This runs integer read only tests using caches and RAM, each thread accessing the same data sequentially. To start with, data is read with large address increments to demonstrate burst data transfers. Performance gains, using L1 cache, can be proportional to the number of cores, but not quite so using L2. The program is designed to produce maximum throughput over buses and demonstrates the fastest RAM speeds using multiple cores.

MP-RandMem - The benchmark has cache and RAM read only and read/write tests using sequential and random access, each thread accessing the same data but starting at different points. It uses the Mutex functions as in Whetstone above, sometimes leading to no performance gains using multiple threads. Random access is also demonstrated as being relatively slow where burst data transfers are involved.

OpenMP-MFLOPS - This uses the same source code calculations as the original MP_MFLOPS benchmark for Linux, with MP-MFLOPS above using a cut down version, implemented for use on Android devices. The OpenMP-MFLOPS benchmark uses the simplest OpenMP directive, #pragma omp parallel for, before the for loops where parallelisation might be expected, and a -fopenmp compile parameter. Also, notOpenMP-MFLOPS is the same, built without the compile parameter.

OpenMP-MemSpeed - This is the same as the Memory Speed Benchmark but with measurements extended to test more memory, also using the OpenMP directive and compile parameter.

To Start


NEON MultiThreading Benchmarks

These are also in Raspberry_Pi_MP_Benchmarks.zip with details and results in Raspberry Pi Multithreading Benchmarks.htm.

MP-NeonMFLOPS - This is the same as MP-MFLOPS, except the calculations are carried out using NEON intrinsic functions.

linpackNeonMP - The original Linpack benchmark for Raspberry Pi operates on double precision floating point 100x100 matrices (N = 100). This version uses mainly the same C programming code as the single precision floating point NEON compilation. It is run on 100x100, 500x500 and 1000x1000 matrices using 0, 1, 2 and 4 separate threads. The 0 thread procedures are identical to those in the single core 100x100 NEON compilation, using NEON intrinsic functions. The benchmark was produced to demonstrate that the original Linpack 100x100 code could not be converted (by me) to show increased performance using multiple threads. The official line is that users are allowed to use their own linear equation solver for this purpose.

To Start


FFT Benchmarks

In 2000, I provided optimised code for a Fast Fourier Transform program, resulting in a series of Windows benchmarks that provided graphical output - see fftgraf results.htm. The fastest one used SSE type assembly code that modern compilers can also produce. The new versions use all C code, with identical calculations compiled to run via Linux, Windows and Android. The benchmarks and source codes are in FFT Benchmarks.zip with further details and results from PCs, Android devices and RPi 2 in FFTBenchmarks.htm.

There are two benchmarks, FFT1, the original, and FFT3c, optimised, with 32 bit and 64 bit versions, when appropriate. Performance is measured in milliseconds, for FFTs sized 1K to 1024K, with three measurements using both single and double precision floating point data, plus some sumchecks for the largest ones. Results from a Raspberry Pi 2, at 900 MHz, are below. These are similar to a year 2000 Pentium III PC.


   RPi2 FFT 32 Bit Benchmark Version 1.0 Thu Sep  3 12:43:42 2015

  Size                     milliseconds
    K     Single Precision              Double Precision
    1     0.309     0.305     0.307     0.364     0.356     0.355
    2     0.666     0.673     0.680     0.928     0.912     0.900
    4     1.734     1.706     1.602     2.508     2.424     2.414
    8     4.406     2.953     3.032     3.851     3.673     3.655
   16     6.978     6.763     6.682     9.544     9.344     9.148
   32    16.150    15.686    15.932    36.384    37.315    35.933
   64    56.813    57.976    56.969   130.955   130.558   133.657
  128   243.766   243.613   243.433   347.136   347.122   346.882
  256   667.865   667.434   667.765   809.323   808.141   808.501
  512  1553.446  1553.409  1552.408  1716.228  1715.446  1715.344
 1024  3221.497  3220.445  3221.042  3739.268  3739.407  3739.764

        1024 Square Check Maximum Noise Average Noise
        SP   9.999520e-01  3.346483e-06  4.565234e-11
        DP   1.000000e+00  1.133294e-23  1.428110e-28

               End at Thu Sep  3 12:44:49 2015

 ###################################################

   RPi2 FFT 32 Bit Benchmark Version 3c.0 Thu Sep  3 12:28:56 2015

  Size                     milliseconds
    K     Single Precision              Double Precision
    1     0.393     0.349     0.348     0.253     0.237     0.283
    2     0.820     0.781     0.802     0.562     0.551     0.552
    4     1.946     1.821     1.767     1.340     1.297     1.296
    8     4.231     4.018     4.190     3.126     3.068     3.000
   16     6.918     6.380     6.355     8.866     8.679     8.712
   32    15.927    15.502    15.498    22.995    23.236    23.116
   64    40.681    40.704    40.340    56.488    56.372    56.186
  128    96.423    95.908    96.286   126.050   125.948   125.872
  256   214.466   212.873   213.311   272.519   272.652   272.925
  512   460.032   456.701   456.707   589.487   587.888   588.012
 1024   995.475   987.867   988.636  1292.931  1279.485  1278.538

        1024 Square Check Maximum Noise Average Noise
        SP   9.999520e-01  3.346483e-06  4.565234e-11
        DP   1.000000e+00  1.133294e-23  1.428110e-28

               End at Thu Sep  3 12:29:37 2015
   

To Start


Temperature Recorder - RPiTemperature - Later RPiHeatMHz For RPi 2

RPiTemperature has been replaced by RPiHeatMHz, to measure and log CPU MHz as well as CPU temperature. The program is included in Raspberry_Pi_Benchmarks.zip. This uses data from the following to display and log results (see RPiHeatMHz.c in zip file):

                   /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq  
                   /opt/vc/bin/vcgencmd measure_temp
  
Run time parameters specify number of samples and interval - see below. Default is 10 samples with 1 second delay between samples. System settings on booting are also included.


      Command - ./RPiTemperature passes 5, seconds 2

 Temperature Measurement - Start at Tue Jun 18 11:57:19 2013

          Using 5 samples at 2 second intervals

 Seconds
    0.0  temp=50.8'C
    2.0  temp=50.8'C
    4.1  temp=50.8'C
    6.1  temp=51.4'C
    8.2  temp=50.8'C
   10.2  temp=50.8'C

 Temperature Measurement - End at   Tue Jun 18 11:57:29 2013

 ##########################################################

 New Command - ./RPiHeatMHz passes 5, seconds 2
 Switches to 900 MHz whilst running CPU benchmark

 Temperature and CPU MHz Measurement

 Start at Sun Mar  1 07:14:19 2015

 Using 5 samples at 2 second intervals

 Boot Settings

 arm_freq=900
 hdmi_force_hotplug=1
 config_hdmi_boost=4
 overscan_left=24
 overscan_right=24
 overscan_top=16
 overscan_bottom=16
 disable_overscan=0
 core_freq=250
 sdram_freq=450
 over_voltage=0

 Seconds
    0.0   600 MHz  temp=44.4'C
    2.0   600 MHz  temp=44.4'C
    4.1   600 MHz  temp=44.4'C
    6.1   600 MHz  temp=44.4'C
    8.2   600 MHz  temp=43.9'C
   10.3   600 MHz  temp=44.4'C

 End at   Sun Mar  1 07:14:30 2015


To Start


Reliability Tests

Following are example results from running the modified OpenGL ES Benchmark and Livermore Loops Stability Test in reliability testing mode. The tests comprised running the OpenGL functions, then these plus the Loops program, both at normal (700 MHz) and overclocked CPU settings (CPU 1000 MHz, Core 500 MHz, SDRAM 600 MHz, over_voltage 6), measuring temperatures with RPiTemperature. The temperature recordings, at 30 second intervals with 36 samples, were started first. With both test programs, the Livermore Loops were started next, at 10 seconds per test (24 x 3 x 10 = 720 seconds), but ran for longer due to early calibration. Finally, a full screen OpenGL test was started with a 15 minute setting (approximate, adjusted to 16 tests at 57 seconds).

On running just the OpenGL tests, FPS speed was virtually the same at 700 and 1000 MHz and only 7.5% slower running the Livermore Loops at the same time, at 700 MHz. As indicated earlier, OpenGL CPU utilisation was about 50%, leading to the Loops recording around half speed, when run at the same time.

Recorded temperatures for all tests are shown below, where room temperature was 23°C and the CPU was allowed to cool down between tests. At 700 MHz, adding the Loops led to a slightly faster temperature increase, but ending only about 3°C higher at 69.7°C. At 1000 MHz, with just OpenGL, maximum temperature was 69.1°C.

Repeating earlier observations, with hotter room temperature, the overclocked tests failed on running OpenGL and Livermore Loops tests at the same time. This time, the OpenGL program terminated with an “Illegal Instruction” after 75 seconds and the display froze on restarting, after a short delay. Temperatures were recorded at 15 second intervals, reaching 72.9°C.

Further reliability test programs have been produced. See Raspberry Pi Stress Tests.htm and Raspberry Pi 2 Stress Tests.htm. The latter includes some using the new OpenGL GLUT Benchmark.


 ######################################################################
 Command - ./OpenGL1Pi.bin Wide 1920, High 1080, RunMinutes 15

 Raspberry Pi OpenGL ES Benchmark 1.1, Fri Jun 21 10:41:01 2013

 Reliability Mode 16 Tests of 57 Seconds

           --------- Frames Per Second --------
 Triangles            All Textured

  36000+       5.28     5.30     5.01     5.37
  36000+       5.37     5.51     5.78     5.80
  36000+       5.75     5.47     5.54     5.32
  36000+       5.29     5.30     5.42     5.91

      Screen Pixels 1920 Wide 1080 High

      End Time Fri Jun 21 10:56:17 2013


 ######################################################################
 Command - ./liverloopsPiA6 Seconds 10

 Livermore Loops Benchmark Opt 3 32 Bit via C/C++ Fri Jun 21 10:40:49 2013

 Reliability test  10 seconds each loop x 24 x 3

 Part 1 of 3 start at Fri Jun 21 10:40:49 2013
 Part 2 of 3 start at Fri Jun 21 10:48:21 2013
 Part 3 of 3 start at Fri Jun 21 10:52:31 2013

 Numeric results were as expected

 MFLOPS for 24 loops
   59.4   65.6   97.1   81.5    9.0   13.7   55.2   72.0   41.9   19.9   17.0   12.3
   10.1    6.9   26.6   34.4   55.1   19.9   44.6   18.9   11.3   13.6   30.1   14.2

 Overall Ratings
 Maximum Average Geomean Harmean Minimum
    97.1    35.2    28.9    23.7     6.9


 ######################################################################
 Command - ./RPiTemperature Passes 36, Seconds 30
    O/C2 - ./RPiTemperature Passes 72, Seconds 15

               Normal        Overclocked
  Seconds    OGL  OG+LPs     OGL  OG+LPs

       0    50.3    49.8    50.8    51.4
      15                            59.5
      30    56.2    56.2    56.2    67.0
      45                            70.2
      60    60.0    60.5    61.1    71.8
      75                            72.9 Illegal
      90    61.1    62.7    63.8    70.2 Instruction
     115                            65.9
     120    62.1    63.8    63.8    64.3 Restart OGL
     135                            62.7 Screen Froze
     150    62.1    64.3    64.8
     180    63.2    65.4    64.8
     210    63.8    65.4    65.9
     240    64.3    65.9    65.9
     270    64.3    66.4    65.9
     300    64.3    66.4    65.9
     330    64.8    66.4    66.4
     360    64.8    67.0    66.4
     390    65.4    67.0    67.0
     420    64.8    68.1    67.0
     450    65.4    67.0    67.5
     480    64.8    68.6    67.0
     510    65.4    68.1    67.0
     540    65.9    68.1    68.1
     570    65.9    68.1    67.5
     600    65.9    68.6    68.1
     630    66.4    68.6    68.6
     660    66.4    69.1    68.1
     690    66.4    68.6    68.1
     720    66.4    69.1    68.1
     750    65.9    69.1    68.1
     780    66.4    69.1    68.1
     810    67.0    69.1    68.1
     840    66.4    69.7    68.1
     870    67.0    69.1    68.6
     900    67.0    69.1    69.1
     930    60.0    64.3    65.4
     960    56.8    59.5    59.5
     990    55.1    57.3    57.3
    1020    53.5    56.8    55.7
    1050    53.0    55.7    55.7

  


To Start


Performance Monitor

JavaDraw - The following shows JavaDraw benchmark speeds, at 10 seconds per test, and simultaneous vmstat performance monitor CPU utilisation, with 5 second samples. The normal tests were run, then run again with an affinity setting to use one CPU core.

When running a single core CPU benchmark, %user time is recorded as around 25%, representing most of one of the four cores, plus a little system overhead. For some reason, JavaDraw seemed to use more than one core for the last two tests. Overall, the details show that the Raspberry Pi 2 can use more than one CPU core to improve performance on drawing with a Java program.

   
                               Normal             Affinity 1 CPU
                               FPS  %usr  %sys    FPS  %usr  %sys
  
  Bitmap Twice Pass 1         45.0    43     9   22.1    22     4
                                      43     8           23     6
  Bitmap Twice Pass 2         56.8    42     9   35.8    23     6
                                      41    10           24     6
  Plus 2 Circles              57.8    41     9   35.4    24     6
                                      44     8           25     5
  Plus 200 Rand Circles       54.9    43     8   36.0    25     5
                                      43     7           23     7
  Plus 320 Long Lines         38.3    42     8   33.4    33     5
                                      42     9           32     6
  Plus 4000 Rand Circles      25.1    48     9   22.4    38     5

  vmstat command for 20 5 second samples - vmstat 5 20 > vmstatlog1.txt
  benchmark commands - java JavaDrawPC and taskset 0x00000001 java JavaDrawPC
   


To Start


Assembly Code

Linpack benchmark performance is completely dependent on the daxpy function with a linked triad dy[i] = dy[i] + da * dx[i], within an unrolled loop containing four linked add and multiply statements. Compilers can produce a range of instruction combinations, to cover a number of different accesses to the function. The following seem to be the most frequently executed instructions. The linpackPiA7SP compilation has the same instructions as linpackPiA7, except using 32 bit registers, for example vfma.f32 s14, s0, s13, possibly executing at the same speed as the 64 bit vfma instruction.

Instruction fmacd is a double precision multiply-accumulate and vfma is a fused floating-point multiply accumulate, where the result of the multiply is not rounded before the accumulation, which might be the reason for different numeric answers. If true to form, FMA can produce a maximum of two results per CPU clock cycle, doubling performance.

Next are details of assembly code for BusSpeed reading all data, where RAM speed from the original PiA6 benchmark is at half the expected speed, and slower than reading every other word. The benchmark test loop has 64 AND statements, read sequentially. The only difference appears to be that gcc 4.8, for PiA7, produces negative indexing.

  

 LinpackPiA6                      LinpackPiA7
 gcc 4.6 armv6 vfp                gcc 4.8 cortex-a7 vfpv4

 .L185:                           .L208:
  fldd  d6, [r1, #-24]             fldd  d16, [r1, #-24]
  fldd  d7, [r3, #-24]             fldd  d19, [r3, #-24]
  fldd  d5, [r3, #-16]             fldd  d18, [r3, #-16]
  fldd  d4, [r3, #-8]              vfma.f64 d19, d0, d16
  fmacd d7, d0, d6                 mov   r4, r1
  mov   r4, r1                     fldd  d17, [r3, #-8]
  fldd  d3, [r3, #0]               fldd  d16, [r3]
  add   r2, r2, #4                 add   r2, r2, #4
  cmp   r0, r2                     add   r1, r1, #32
  fstd  d7, [r3, #-24]             cmp   r0, r2
  fldd  d7, [r1, #-16]             fstd  d19, [r3, #-24]
  fmacd d5, d0, d7                 fldd  d19, [r1, #-48]
  fstd  d5, [r3, #-16]             vfma.f64 d18, d0, d19
  fldd  d7, [r1, #-8]              fstd  d18, [r3, #-16]
  add   r1, r1, #32                fldd  d18, [r1, #-40]
  fmacd d4, d0, d7                 vfma.f64 d17, d0, d18
  fstd  d4, [r3, #-8]              fstd  d17, [r3, #-8]
  fldd  d7, [r4, #0]               fldd  d17, [r4]
  fmacd d3, d0, d7                 vfma.f64 d16, d0, d17
  fmrrd r4, r5, d3                 fmrrd r4, r5, d16
  strd  r4, [r3], #32              strd  r4, [r3], #32
  bgt   .L185                      bgt   .L208

 busspeedPiA6                     busspeedPiA7

 .L19:                            .L17:
  ldmia r3, {r0, ip}               ldr   r0, [r3]
  ldr   r1, [r3, #8]               add   r2, r2, #64
  ldr   r5, [r3, #248]             ldr   ip, [r3, #4]
  and   ip, ip, r0                 add   r3, r3, #256
  ldr   r0, [r3, #12]              ldr   r1, [r3, #-248]
  and   ip, ip, r1                 and   ip, ip, r0
  ldr   r1, [r3, #16]              ldr   r0, [r3, #-244]
  and   ip, ip, r0                 and   ip, ip, r1
  ldr   r0, [r3, #20]              ldr   r1, [r3, #-240]
  and   ip, ip, r1                 and   ip, ip, r0
        To                               To
  ldr   r0, [r3, #244]             ldr   r1, [r3, #-16]
  and   ip, ip, r1                 and   ip, ip, r0
  and   ip, ip, r0                 ldr   r0, [r3, #-12]
  ldr   r4, [r7]                   and   ip, ip, r1
  ldr   r0, [r3, #252]             ldr   r4, [r3, #-8]
  add   r2, r2, #64                ldr   r5, [r7]
  and   ip, ip, r5                 and   ip, ip, r0
  and   r1, ip, r0                 ldr   r0, [r3, #-4]
  cmp   r4, r2                     and   ip, ip, r4
  and   r6, r1, r6                 cmp   r5, r2
  add   r3, r3, #256               and   r1, ip, r0
  bgt   .L19                       and   r6, r1, r6
                                   bgt   .L17

 Both mainly comprise repeated ldr r1 / and r0 and ldr r0 / and r1 pairs.


To Start


Roy Longbottom at Linkedin  Roy Longbottom May 2016



The Official Internet Home for my Benchmarks is via the link
Roy Longbottom's PC Benchmark Collection