Command:     mpirun -n 24 ./mmult3_c.exe 4608
Resources:   1 node (12 physical, 24 logical cores per node)
Tasks:       24 processes
Machine:     mic2
Start time:  Fri Feb 20 21:29:52 2015
Total time:  60 seconds (1 minute)
Full path:   /scratch/allinea/mmult/3_fix
Input file:
Notes:

Summary: mmult3_c.exe is MPI-bound in this configuration
CPU: 44.9%

Time spent running application code. High values are usually good.

This is low; it may be worth improving MPI or I/O performance first.

MPI: 54.9%

Time spent in MPI calls. High values are usually bad.

This is high; check the MPI breakdown for advice on reducing it.

I/O: 0.2%

Time spent in filesystem I/O. High values are usually bad.

This is very low; however single-process I/O often causes large MPI wait times.

This application run was MPI-bound. A breakdown of this time and advice for investigating further is in the MPI section below.


CPU
A breakdown of the 44.9% CPU time:
Scalar numeric ops: 11.2%
Vector numeric ops: 7.5%
Memory accesses: 81.3%
The per-core performance is memory-bound. Use a profiler to identify time-consuming loops and check their cache performance.
Little time is spent in vectorized instructions. Check the compiler's vectorization advice to see why key loops could not be vectorized.
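Since the profiled program is a matrix multiply, the loop order alone often decides both cache behavior and vectorization. A minimal sketch (a hypothetical kernel, not the actual mmult3_c.exe source) of the i-k-j ordering, which gives the inner loop unit-stride accesses that compilers can usually vectorize:

```c
#include <stddef.h>

/* Hypothetical kernel: the i-k-j loop order makes the inner loop walk
 * b[] and c[] contiguously, so the compiler can vectorize it (verify
 * with e.g. gcc -O3 -fopt-info-vec). The classic i-j-k order strides
 * through b[] column-wise and typically defeats vectorization. */
void mmult(size_t n, const double *a, const double *b, double *c)
{
    for (size_t i = 0; i < n; i++)
        for (size_t k = 0; k < n; k++) {
            double aik = a[i * n + k];
            for (size_t j = 0; j < n; j++)  /* unit stride: vectorizable */
                c[i * n + j] += aik * b[k * n + j];
        }
}
```

The same reordering also improves the memory-access picture above, since each element of a[] is reused across a full row of c[].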
MPI
A breakdown of the 54.9% MPI time:
Time in collective calls: 69.0%
Time in point-to-point calls: 31.0%
Effective process collective rate: 0.00e+00 bytes/s
Effective process point-to-point rate: 3.42e+07 bytes/s
Most of the time is spent in collective calls with a very low transfer rate. This suggests load imbalance is causing synchronization overhead; use an MPI profiler to investigate further.
The point-to-point transfer rate is low. This can be caused by inefficient message sizes, such as many small messages, or by imbalanced workloads causing processes to wait.
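The cost of many small messages can be made concrete with a simple alpha-beta (latency-bandwidth) model; the parameter values used below are illustrative assumptions, not measurements from this run:

```c
/* Alpha-beta cost model for sending `total_bytes` split into `nmsg`
 * equal messages: each message pays a fixed latency `alpha` (seconds)
 * on top of the bandwidth term `total_bytes * beta` (seconds/byte).
 * Splitting the same payload into more messages only adds latency. */
double send_time(double alpha, double beta, double total_bytes, int nmsg)
{
    return nmsg * alpha + total_bytes * beta;
}
```

Under any realistic alpha, a payload split into thousands of small sends pays thousands of latencies for the same bandwidth term, which is why aggregating small messages into fewer, larger ones raises the effective point-to-point rate.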
I/O
A breakdown of the 0.2% I/O time:
Time in reads: 0.0%
Time in writes: 100.0%
Effective process read rate: 0.00e+00 bytes/s
Effective process write rate: 1.09e+08 bytes/s
Most of the time is spent in write operations with an average effective transfer rate. It may be possible to achieve faster effective transfer rates using asynchronous file operations.
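In an MPI code the natural route to asynchronous output is MPI-IO's nonblocking calls (e.g. MPI_File_iwrite). As a self-contained illustration of the same overlap idea, here is a sketch using POSIX aio_write; the file path and error handling are illustrative, and real code would do useful computation between starting and completing the write:

```c
#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: start an asynchronous write, leave room to overlap
 * computation, then block only when the result is actually needed. */
int async_write(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = (volatile void *)data;
    cb.aio_nbytes = len;

    if (aio_write(&cb) != 0) {   /* queue the write, do not wait */
        close(fd);
        return -1;
    }

    /* ...overlap computation here while the kernel performs the I/O... */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);  /* wait for completion when required */
    ssize_t done = aio_return(&cb);
    close(fd);
    return done == (ssize_t)len ? 0 : -1;
}
```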
Threads
A breakdown of how multiple threads were used:
Computation: 0.0%
Synchronization: 0.0%
Physical core utilization: 199.7%
Involuntary context switches per second: 373239.4
No measurable time is spent in multithreaded code.
Memory
Per-process memory usage may also affect scaling:
Mean process memory usage: 1.62e+08 bytes
Peak process memory usage: 5.58e+08 bytes
Peak node memory usage: 32.0%
There is significant variation between peak and mean memory usage. This may be a sign of workload imbalance or a memory leak.
The peak node memory usage is low. Running with fewer MPI processes and more data on each process may be more efficient.
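One way to check whether the peak/mean gap comes from a few heavy ranks is to sample each process's peak resident set size. A minimal sketch, assuming Linux semantics (ru_maxrss is reported in kilobytes there); in an MPI code each rank would call this and the values would be combined with MPI_Reduce using MPI_MAX and MPI_SUM to compare peak against mean usage across ranks:

```c
#include <sys/resource.h>

/* Return this process's peak resident set size in kilobytes (Linux),
 * or -1 on failure. In an MPI program, reduce the per-rank values to
 * contrast the maximum with the mean and locate imbalanced ranks. */
long peak_rss_kb(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1;
    return ru.ru_maxrss;
}
```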