
Blue Gene

A Blue Gene/P supercomputer at Argonne National Laboratory
Hierarchy of Blue Gene processing units

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS) range, with low power consumption.

The project created three generations of supercomputers: Blue Gene/L, Blue Gene/P, and Blue Gene/Q. For several years, Blue Gene systems led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2008 National Medal of Technology and Innovation.[4]

History

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[5] The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T.J. Watson Research Center.

In 1999, Alan Gara moved from Columbia University, where he had been leading work on the QCDOC architecture,[6] to the IBM T.J. Watson Research Center. The QCDOC system was a special-purpose computer for QCD computations; it used a chip with an embedded PowerPC core. At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: the 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other, and a parallel I/O subsystem was added. The US Department of Energy (DOE) started funding the development of this system, and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

In November 2001, Lawrence Livermore National Laboratory (LLNL) joined IBM as a research partner for Blue Gene. Development proceeded at IBM T.J. Watson Research Center and at IBM Rochester with the goal of delivering a system to LLNL.

Blue Gene/L

A Blue Gene/L cabinet
Block diagram of the Blue Gene/L ASIC, including its dual PowerPC 440 cores

In November 2004, a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007, the Blue Gene/L installation at LLNL[7] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL Blue Gene/L installation held the first position in the TOP500 list for 3.5 years, until June 2008, when it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 petaFLOPS mark. The system was built at IBM's plant in Rochester, Minnesota.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[8] At Supercomputing 2006,[9] Blue Gene/L won the prize in all classes of the HPC Challenge awards.[10] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[11]

Major features

The Blue Gene/L supercomputer was unique in the following aspects:[12]

  • Trading the speed of processors for lower power consumption. Blue Gene/L used low-frequency, low-power embedded PowerPC cores with floating-point accelerators. While the performance of each chip was relatively low, the system could achieve a better performance-to-energy ratio for applications that could use large numbers of nodes.
  • Dual processors per node with two working modes: co-processor mode, where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code and share both the computation and the communication load.
  • System-on-a-chip design. All node components were embedded on one chip, with the exception of 512 MB external DRAM.
  • A large number of nodes (scalable in increments of 1,024 up to at least 65,536).
  • Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management.
  • Lightweight OS per node for minimum system overhead (system noise).

Architecture

One Blue Gene/L node board
A schematic overview of a Blue Gene/L supercomputer

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline double-precision Floating Point Unit (FPU), a cache subsystem with a built-in DRAM controller, and the logic to support multiple communication subsystems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
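
This peak figure follows from the clock rate and the FPU layout. A plausible accounting, assuming each of the two FPU pipelines retires one fused multiply-add (two floating-point operations) per cycle:

\[
2\ \text{cores} \times 2\ \text{pipelines} \times 2\ \tfrac{\text{flops}}{\text{cycle}} \times 0.7\ \text{GHz} = 5.6\ \text{GFLOPS}
\]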

Compute nodes were packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There were 32 node boards per cabinet/rack.[13] By integrating all essential sub-systems on a single chip and using low-power logic, each compute or I/O node dissipated about 17 watts (including DRAMs). This allowed very aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits of electrical power supply and air cooling. The performance metrics in terms of FLOPS per watt, FLOPS per m² of floor space and FLOPS per unit cost allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate a row of faulty components to allow the machine to continue to run.

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer first had to be reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.
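
The partition-size rule is simple to check mechanically. The following is a minimal illustrative sketch; the helper function is hypothetical, not part of any Blue Gene control-system API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical helper, not a Blue Gene API: a Blue Gene/L partition
   had to hold a power-of-two number of nodes, and at least 2^5 = 32. */
static bool is_valid_partition_size(unsigned nodes)
{
    /* (nodes & (nodes - 1)) == 0 holds exactly for powers of two. */
    return nodes >= 32 && (nodes & (nodes - 1)) == 0;
}

int main(void)
{
    const unsigned sizes[] = { 32, 48, 512, 1024, 65536 };
    for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
        printf("%6u nodes: %s\n", sizes[i],
               is_valid_partition_size(sizes[i]) ? "valid" : "invalid");
    return 0;
}
```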

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time per node in co-processor mode, or one process per CPU in virtual-node mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby[14] and Python[15] have been ported to the compute nodes.
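
In practice this meant one MPI rank per node (co-processor mode) or per CPU (virtual-node mode), with all inter-node communication expressed through MPI. A minimal sketch of that programming model, using only standard MPI calls:

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal sketch of the Blue Gene/L programming model: one single-threaded
   process per node (or per CPU), communicating only via MPI. */
int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* A global reduction like this one maps naturally onto the
       dedicated collective network rather than the 3D torus. */
    int one = 1, total = 0;
    MPI_Reduce(&one, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d of %d ranks reported in\n", total, nranks);

    MPI_Finalize();
    return 0;
}
```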

Blue Gene/P

A Blue Gene/P node card
A schematic overview of a Blue Gene/P supercomputer

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers, designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[16]

Design

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[17] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007–2008.[2]
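
The per-node and per-rack peak figures are consistent with the clock rate, assuming each PowerPC 450 core's dual FPU retires two fused multiply-adds (four floating-point operations) per cycle:

\[
4\ \text{cores} \times 4\ \tfrac{\text{flops}}{\text{cycle}} \times 0.85\ \text{GHz} = 13.6\ \text{GFLOPS per node}
\]
\[
1024\ \text{nodes} \times 13.6\ \text{GFLOPS} \approx 13.9\ \text{TFLOPS per rack}
\]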

Installations

The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of 2 racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger.[1]

  • On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors) was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS.[18] When inaugurated, it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 petaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially.[19] JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
  • A 13.9 TFLOPS Blue Gene/P was installed at the University of Rochester in Rochester, New York in 2008.[20] The system consists of a single rack (1,024 compute nodes) and 180 TB of storage.[21]
  • The first laboratory in the United States to receive a Blue Gene/P was Argonne National Laboratory. At completion, the 40-rack (40,960 nodes, 163,840 processor cores) "Intrepid" system was ranked #3 on the June 2008 TOP500 list.[22] The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition.
  • Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009.
  • The King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009.
  • A Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine.
  • A 2-rack Blue Gene/P was installed on September 9, 2008 in Sofia, the capital of Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University.[23]
  • The first Blue Gene/P in the ASEAN region was installed in 2010 at Universiti Brunei Darussalam's research centre, the UBD-IBM Centre. The installation prompted research collaboration between the university and IBM on climate modeling, investigating the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among other topics.[24]
  • In 2010, a Blue Gene/P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative.[25]
  • In 2012, a Blue Gene/P was installed at Rice University, to be jointly administered with the University of São Paulo.[26]

Applications

  • Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match.[27]
  • The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections.[28]
  • The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper published in the ACM Operating Systems Review describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity.[29][30] Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.[citation needed]
  • In 2011 a Rutgers University / IBM / University of Texas team linked the KAUST Shaheen installation together with a Blue Gene/P installation at the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application.[31]

Blue Gene/Q

The IBM Blue Gene/Q installed at Argonne National Laboratory, near Chicago, Illinois.

The third supercomputer design in the Blue Gene series, Blue Gene/Q reached 20 petaFLOPS[32] in 2012. Blue Gene/Q continues to expand and enhance the Blue Gene/L and /P architectures.

Design

The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit PowerPC A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD quad-vector double-precision floating-point unit (IBM QPX). The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[33] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. Sixteen processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is a redundant spare, used in case one of the other cores is permanently damaged; the spared-out core is shut down in functional operation. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[34]
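
The 204.8 GFLOPS peak is consistent with the 16 compute cores and the 4-wide QPX unit, assuming one fused multiply-add per vector lane per cycle (8 flops per core per cycle):

\[
16\ \text{cores} \times 8\ \tfrac{\text{flops}}{\text{cycle}} \times 1.6\ \text{GHz} = 204.8\ \text{GFLOPS}
\]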

A Q32[35] compute drawer contains 32 compute cards, each water-cooled and connected into the 5D torus network.[36]

A rack contains 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB of RAM.[36]

Separate I/O drawers are air-cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.[36]
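
These packaging figures determine the per-rack peak; a quick cross-check against the chip's 204.8 GFLOPS:

\[
1024\ \text{nodes} \times 204.8\ \text{GFLOPS} \approx 209.7\ \text{TFLOPS per rack}
\]

which matches, for example, the 209 TFLOPS single-rack installation at the University of Rochester listed below.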

Performance

At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65,536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TFLOPS Linpack, outperforming the original 2007 104-rack Blue Gene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of the most energy-efficient supercomputers with up to 2.1 GFLOPS/W.[2]

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500[3] and Green500.[2]

Installations

The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of half a rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1]

  • A Blue Gene/Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory (LLNL) beginning in 2011 and was fully deployed in June 2012. It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research. It consists of 96 racks (comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 PB of memory) covering an area of about 3,000 square feet (280 m²).[37] In June 2012, the system was ranked as the world's fastest supercomputer,[38][39] at 20.1 PFLOPS peak and 16.32 PFLOPS sustained (Linpack), drawing up to 7.9 megawatts of power.[1] These performance metrics also classify Sequoia, together with other Blue Gene/Q systems, as one of the greenest supercomputers, at over 2 GFLOPS/W.[2]
  • A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012.[42] This system is part of the Health Sciences Center for Computational Innovation, which is dedicated to the application of high-performance computing to research programs in the health sciences. The system consists of a single rack (1,024 compute nodes) with 400 TB of high-performance storage.[21] It was identified as one of the world's most energy-efficient supercomputers, tied at #3 on the June 2012 Green500 list.[43]
  • An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012.[44] This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.[45] The system consists of 4 racks (65,536 cores, 64 TB of RAM) with 350 TB of storage.[46]

Applications

Record-breaking science applications have been run on the BG/Q, the first machine to cross 10 petaFLOPS of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaFLOPS with a 3.6-trillion-particle benchmark run,[47] while the Cardioid code,[48][49] which models the electrophysiology of the human heart, achieved nearly 12 petaFLOPS with a near-real-time simulation, both on Sequoia.

See also

  • CNK operating system
  • Deep Blue (chess computer)
  • INK (operating system)
  • Watson (computer)

References

  1. ^ a b c d e f g "The Top500 List". http://www.top500.org.
  2. ^ a b c d e "The Green500 List". http://www.green500.org.
  3. ^ a b c "The Graph500 List". http://www.graph500.org.
  4. ^ Harris, Mark (September 18, 2009). "Obama honours IBM supercomputer". Techradar. http://www.techradar.com/news/computing/obama-honours-ibm-supercomputer-636869. Retrieved 2009-09-18.
  5. ^ "Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer". IBM Systems Journal, Special Issue on Deep Computing for the Life Sciences, 40 (2). 
  6. ^ Boyle, P. A., Chen, D., Christ, N. H., Clark, M. A., Cohen, S. D., Cristian, C., Dong, Z., Gara, A., Joo, B., Jung, C., Kim, C., Levkova, L. A., Liao, X., Liu, G., Mawhinney, R. D., Ohta, S., Petrov, K., Wettig, T. and Yamaguchi, A. (March 2005). "Overview of the QCDSP and QCDOC computers". IBM Journal of Research and Development 49 (2/3): 351–365.
  7. ^ "Lawrence Livermore National Laboratory: BlueGene/L". http://asc.llnl.gov/computing_resourc es/bluegenel/.
  8. ^ hpcwire.com
  9. ^ SC06
  10. ^ hpcchallenge.org
  11. ^ bbc.co.uk
  12. ^ "Blue Gene". IBM Journal of Research and Development 49 (2/3). 2005. 
  13. ^ "BlueGene/L Configuration". https://asc.llnl.gov/computing_resources/bluegenel/configuration.html.
  14. ^ ece.iastate.edu
  15. ^ William Scullin (March 12, 2011). "Python for High Performance Computing". Atlanta, GA. http://us.pycon.org/2011/home/.
  16. ^ "IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer". 2007-06-27. http://www-03.ibm.com/press/us/en/pre ssrelease/21791.wss. Retrieved 2011-12-24.
  17. ^ "Overview of the IBM Blue Gene/P project". IBM Journal of Research and Development. Jan 2008. http://dx.doi.org/10.1147/rd.521.0199.
  18. ^ "Supercomputing: Jülich Amongst World Leaders Again". IDG News Service. 2007-11-12. 
  19. ^ "IBM Press room - 2009-02-10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe's Most Powerful". www-03.ibm.com. 2009-02-10. http://www-03.ibm.com/press/us/en/pre ssrelease/26657.wss. Retrieved 2011-03-11.
  20. ^ http://www.urmc.rochester.edu/news/st ory/index.cfm?id=3498
  21. ^ a b http://www.circ.rochester.edu/resourc es.html
  22. ^ "Argonne's Supercomputer Named World’s Fastest for Open Science, Third Overall"
  23. ^ Вече си имаме и суперкомпютър [We now have a supercomputer too], Dir.bg, 9 September 2008
  24. ^ "IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research". IBM News Room. http://www-03.ibm.com/press/us/en/pre ssrelease/32755.wss. Retrieved 18 October 2012.
  25. ^ "IBM Press room - 2010-02-11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research - Australia". 03.ibm.com. 2010-02-11. http://www-03.ibm.com/press/au/en/pre ssrelease/29383.wss. Retrieved 2011-03-11.
  26. ^ "Rice University, IBM partner to bring first Blue Gene supercomputer to Texas, March 2012". http://news.rice.edu/2012/03/30/rice- university-ibm-partner-to-bring-first -blue-gene-supercomputer-to-texas/.
  27. ^ "Topalov training with super computer Blue Gene P". Chessdom. http://players.chessdom.com/veselin-t opalov/topalov-blue-gene-p. Retrieved 21 May 2010.
  28. ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 91.
  29. ^ "Project Kittyhawk: A Global-Scale Computer". http://www.research.ibm.com/kittyhawk /.
  30. ^ Project Kittyhawk: building a global-scale computer
  31. ^ "Rutgers-led Experts Assemble Globe-Spanning Supercomputer Cloud". http://news.rutgers.edu.+2011-07-06. http://news.rutgers.edu/medrel/specia l-content/summer-2011/rutgers-led-exp erts-20110706. Retrieved 2011-12-24.
  32. ^ "IBM announces 20-petaflops supercomputer". Kurzweil. 18 November 2011. http://www.kurzweilai.net/ibm-announc es-20-petaflops-supercomputer. Retrieved 13 November 2012. "IBM has announced the Blue Gene/Q supercomputer, with peak performance of 20 petaflops"
  33. ^ "Memory Speculation of the Blue Gene/Q Compute Chip". http://wands.cse.lehigh.edu/IBM_BQC_P ACT2011.ppt. Retrieved 2011-12-23.
  34. ^ "The Blue Gene/Q Compute chip". http://www.hotchips.org/wp-content/up loads/hc_archives/hc23/HC23.18.1-many core/HC23.18.121.BlueGene-IBM_BQC_HC2 3_20110818.pdf. Retrieved 2011-12-23.
  35. ^ IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications
  36. ^ a b c "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 2010-11-22. http://www.theregister.co.uk/2010/11/22/ibm_blue_gene_q_super/. Retrieved 2010-11-25.
  37. ^ Feldman, Michael (2009-02-03). "Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q". HPCwire. http://www.hpcwire.com/features/Lawrence-Livermore-Prepares-for-20-Petaflop-Blue-GeneQ-38948594.html. Retrieved 2011-03-11.
  38. ^ Johnston, Donald B. (2012-06-18). "NNSA's Sequoia supercomputer ranked as world's fastest". https://www.llnl.gov/news/newsreleases/2012/Jun/NR-12-06-07.html. Retrieved 2012-06-23.
  39. ^ TOP500 Press Release
  40. ^ MIRA: World's fastest supercomputer
  41. ^ ANL's Mira Homepage
  42. ^ http://www.rochester.edu/news/show.ph p?id=4192
  43. ^ http://www.green500.org/news/june-2012-green500-list-released?q=lists/green201206
  44. ^ http://themelbourneengineer.eng.unimelb.edu.au/2012/02/worlds-greenest-computer-comes-to-melbourne/
  45. ^ Victorian Life Sciences Computation Initiative
  46. ^ VLSCI's Computer & Software Configuration
  47. ^ S. Habib, V. Morozov, H. Finkel, A. Pope, K. Heitmann, K. Kumaran, T. Peterka, J. Insley, D. Daniel, P. Fasel, N. Frontiere, and Z. Lukic. "The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q". http://arxiv.org/abs/1211.4864.
  48. ^ "Cardioid Cardiac Modeling Project". http://researcher.watson.ibm.com/rese archer/view_project.php?id=2992.
  49. ^ "Venturing into the Heart of High-Performance Computing Simulations". https://str.llnl.gov/Sep12/streitz.ht ml.
