Graph500
The Graph500 is a rating of supercomputer systems focused on data-intensive workloads. The project was announced at the International Supercomputing Conference in June 2010, and the first list was published at the ACM/IEEE Supercomputing Conference in November 2010. New versions of the list are published twice a year. The main performance metric used to rank the supercomputers is GTEPS (giga-traversed edges per second).
Richard Murphy of Sandia National Laboratories says that "The Graph500's goal is to promote awareness of complex data problems", rather than focusing on computer benchmarks such as HPL (High Performance Linpack), on which the TOP500 is based.[1]
Despite its name, the rating has never listed 500 systems; it had grown to 174 entries by June 2014.[2]
The algorithm and implementation that topped the list are described in the paper "Extreme scale breadth-first search on supercomputers".[3]
There is also a Green Graph 500 list, which uses the same performance metric but ranks systems by performance per watt, similar to how the Green 500 complements the TOP500 (HPL).
Benchmark
The benchmark used in Graph500 stresses the communication subsystem of the machine, rather than counting double-precision floating-point operations.[1] It is based on a breadth-first search in a large undirected graph (a Kronecker graph model with an average degree of 16). The benchmark has three computation kernels: the first generates the graph and compresses it into a sparse structure such as CSR or CSC (Compressed Sparse Row/Column); the second performs parallel breadth-first searches from randomly chosen source vertices (64 searches per run); the third runs a single-source shortest paths (SSSP) computation. Six problem sizes (scales) of graph are defined: toy (2^26 vertices; 17 GB of RAM), mini (2^29; 137 GB), small (2^32; 1.1 TB), medium (2^36; 17.6 TB), large (2^39; 140 TB), and huge (2^42; 1.1 PB of RAM).[4]
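As an illustration of what the second kernel measures, the following is a minimal sketch of a level-synchronous BFS over a CSR graph together with the GTEPS calculation (traversed edges divided by BFS time, in billions). It is not the reference implementation; the type and function names are invented for the example.

```c
#include <stdint.h>
#include <stdlib.h>

/* Graph in Compressed Sparse Row (CSR) form: the neighbours of vertex v
 * are adj[row_ptr[v] .. row_ptr[v+1]-1]. */
typedef struct {
    int64_t nverts;
    int64_t *row_ptr;   /* length nverts + 1 */
    int64_t *adj;       /* one entry per directed edge slot */
} csr_graph;

/* Level-synchronous BFS from `source`. Fills parent[] and returns the
 * number of edges examined, the quantity TEPS is based on. */
int64_t bfs_csr(const csr_graph *g, int64_t source, int64_t *parent)
{
    int64_t *frontier = malloc(g->nverts * sizeof *frontier);
    int64_t *next = malloc(g->nverts * sizeof *next);
    int64_t nfront = 0, edges = 0;

    for (int64_t v = 0; v < g->nverts; v++)
        parent[v] = -1;
    parent[source] = source;
    frontier[nfront++] = source;

    while (nfront > 0) {
        int64_t nnext = 0;
        for (int64_t i = 0; i < nfront; i++) {
            int64_t u = frontier[i];
            for (int64_t e = g->row_ptr[u]; e < g->row_ptr[u + 1]; e++) {
                int64_t w = g->adj[e];
                edges++;
                if (parent[w] == -1) {   /* first time w is reached */
                    parent[w] = u;
                    next[nnext++] = w;
                }
            }
        }
        int64_t *tmp = frontier; frontier = next; next = tmp;
        nfront = nnext;
    }
    free(frontier);
    free(next);
    return edges;
}

/* GTEPS = traversed edges / BFS time in seconds / 10^9 */
double gteps(int64_t edges_traversed, double seconds)
{
    return (double)edges_traversed / seconds / 1e9;
}
```

In the benchmark itself, 64 such searches from random source vertices are timed on the generated Kronecker graph, and the reported score is aggregated over them.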
The reference implementation of the benchmark includes several versions:[5]
- a serial high-level version in GNU Octave
- a serial low-level version in C
- a parallel C version using OpenMP (see the sketch after this list)
- two versions for the Cray XMT
- a basic MPI version (using MPI-1 functions)
- an optimized MPI version (using MPI-2 one-sided communication)
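As a rough illustration of how the OpenMP flavour can parallelize a single BFS level, the fragment below lets threads expand frontier vertices concurrently and claim unvisited neighbours with an atomic compare-and-swap (here a GCC/Clang builtin). This is only a sketch under those assumptions, not the reference code; the function name and data layout are invented for the example.

```c
#include <stdint.h>

/* One BFS level expanded with OpenMP: threads scan frontier vertices in
 * parallel and claim unvisited neighbours atomically, appending them to
 * the shared next-level frontier. */
void expand_level_omp(const int64_t *row_ptr, const int64_t *adj,
                      int64_t *parent,
                      const int64_t *frontier, int64_t nfront,
                      int64_t *next, int64_t *nnext)
{
    *nnext = 0;
    #pragma omp parallel for schedule(dynamic, 64)
    for (int64_t i = 0; i < nfront; i++) {
        int64_t u = frontier[i];
        for (int64_t e = row_ptr[u]; e < row_ptr[u + 1]; e++) {
            int64_t w = adj[e];
            int64_t expected = -1;
            /* only one thread succeeds in setting parent[w] */
            if (__atomic_compare_exchange_n(&parent[w], &expected, u, 0,
                                            __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {
                int64_t slot;
                #pragma omp atomic capture
                slot = (*nnext)++;
                next[slot] = w;
            }
        }
    }
}
```

The MPI versions instead distribute the graph across processes, so each BFS level typically also involves exchanging newly discovered vertices between ranks.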
The implementation strategy that took first place on the Japanese K computer is described by Ueno et al.[6]
Top 10 ranking
2023
According to the June 2023 release of the list, the new Wuhan supercomputer ranks highest for the SSSP results with 19039.1 GTEPS (with Fugaku fourth), while in the BFS ranking shown below it is second, with a lower GTEPS figure:[7]
Rank | Country | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|---|
1 | Japan | RIKEN Advanced Institute for Computational Science | Supercomputer Fugaku (Fujitsu A64FX) | 152064 | 7299072 | 42 | 137096 |
2 | China | Wuhan | Kunpeng 920+Tesla A100 | 252 | 6999552 | 40 | 121804.3 |
3 | USA | Oak Ridge National Laboratory | Frontier (HPE Cray EX235a) | 9248 | 8730112 | 40 | 29654.6 |
4 | China | Pengcheng Lab | Pengcheng Cloudbrain-II (Kunpeng 920+Ascend 910) | 488 | 93696 | 40 | 25242.9 |
5 | China | National Supercomputing Center in Wuxi | Sunway TaihuLight (Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
Japan also has a new computer ranked 8th.
2022
According to the November 2022 release of the list:[8]
Rank | Country | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|---|
1 | Japan | RIKEN Advanced Institute for Computational Science | Supercomputer Fugaku (Fujitsu A64FX) | 158976 | 7630848 | 41 | 102955 |
2 | China | Pengcheng Lab | Pengcheng Cloudbrain-II (Kunpeng 920+Ascend 910) | 488 | 93696 | 40 | 25242.9 |
3 | China | National Supercomputing Center in Wuxi | Sunway TaihuLight (Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
4 | Japan | Information Technology Center, University of Tokyo | Wisteria/BDEC-01 (PRIMEHPC FX1000) | 7680 | 368640 | 37 | 16118 |
5 | Japan | Japan Aerospace Exploration Agency | TOKI-SORA (PRIMEHPC FX1000) | 5760 | 276480 | 36 | 10813 |
6 | EU | EuroHPC/CSC | LUMI-C (HPE Cray EX) | 1492 | 190976 | 38 | 8467.71 |
7 | US | Oak Ridge National Laboratory | OLCF Summit (IBM POWER9) | 2048 | 86016 | 40 | 7665.7 |
8 | Germany | Leibniz Rechenzentrum | SuperMUC-NG (ThinkSystem SD530 Xeon Platinum 8174 24C 3.1GHz Intel Omni-Path) | 4096 | 196608 | 39 | 6279.47 |
9 | Germany | Zuse Institute Berlin | Lise (Intel Omni-Path) | 1270 | 121920 | 38 | 5423.94 |
10 | China | National Engineering Research Center for Big Data Technology and System | DepGraph Supernode (DepGraph (+GPU Tesla A100)) | 1 | 128 | 33 | 4623.379 |
2016
According to the June 2016 release of the list:[10]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | Riken Advanced Institute for Computational Science | K computer (Fujitsu custom) | 82944 | 663552 | 40 | 38621.4 |
2 | National Supercomputing Center in Wuxi | Sunway TaihuLight (NRCPC - Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
3 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 98304 | 1572864 | 41 | 23751 |
4 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14982 |
5 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
6 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
7 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
8 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | Science and Technology Facilities Council – Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
2014
According to the June 2014 release of the list:[2]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | RIKEN Advanced Institute for Computational Science | K computer (Fujitsu custom) | 65536 | 524288 | 40 | 17977.1 |
2 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 65536 | 1048576 | 40 | 16599 |
3 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14328 |
4 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
5 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
6 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
7 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Science and Technology Facilities Council - Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
2013
According to the June 2013 release of the list:[11]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 65536 | 1048576 | 40 | 15363 |
2 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14328 |
3 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
4 | RIKEN Advanced Institute for Computational Science | K computer (Fujitsu custom) | 65536 | 524288 | 40 | 5524.12 |
5 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
6 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
7 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Science and Technology Facilities Council - Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
References
- The Exascale Report (March 15, 2012). "The Case for the Graph 500 – Really Fast or Really Productive? Pick One". Inside HPC.
- "June 2014 | Graph 500". Archived from the original on June 28, 2014. Retrieved June 26, 2014.
- Ueno, Koji; Suzumura, Toyotaro; Maruyama, Naoya; Fujisawa, Katsuki; Matsuoka, Satoshi (2016). "Extreme scale breadth-first search on supercomputers". 2016 IEEE International Conference on Big Data (Big Data). pp. 1040–1047. doi:10.1109/BigData.2016.7840705. ISBN 978-1-4673-9005-7. S2CID 8680200.
- "Performance Evaluation of Graph500 on Large-Scale Distributed Environment". IEEE IISWC 2011, Austin, TX; presentation.
- "Graph500: адекватный рейтинг" (in Russian). Open Systems #1 2011.
- Ueno, K.; Suzumura, T.; Maruyama, N.; Fujisawa, K.; Matsuoka, S. (December 1, 2016). "Extreme scale breadth-first search on supercomputers". 2016 IEEE International Conference on Big Data (Big Data). pp. 1040–1047. doi:10.1109/BigData.2016.7840705. ISBN 978-1-4673-9005-7. S2CID 8680200.
- "Complete Results - Graph 500". June 14, 2017. Retrieved July 21, 2023.
- "November 2022; Graph 500". June 14, 2017. Retrieved November 18, 2022.
- "Fujitsu and RIKEN Take First Place in Graph500 Ranking with Supercomputer Fugaku". HPCwire. June 23, 2020. Retrieved August 8, 2020.
- "June 2016 | Graph 500". Archived from the original on June 24, 2016. Retrieved July 6, 2016.
- "June 2013 | Graph 500". Archived from the original on June 21, 2013. Retrieved June 19, 2013.