
HPC Systems


Sun Constellation Linux Cluster: Ranger

System Name: Ranger
Host Name: ranger.tacc.utexas.edu
IP Address: 129.114.50.163
Operating System: Linux
Number of Nodes: 3,936
Number of Processing Cores: 62,976
Total Memory: 123TB
Peak Performance: 579.4TFlops
Total Disk: 1.73PB (shared), 31.4TB (local)
Description:

Ranger is the largest computing system in the world for open science research. As the first of the new NSF Track2 HPC acquisitions, this system provides unprecedented computational capabilities to the national research community and ushers in the petascale science era. Ranger will enable breakthrough science that has never before been possible, and will provide groundbreaking opportunities in computational science & technology research from parallel algorithms to fault tolerance, from scalable visualization to next generation programming languages.

Ranger went into production on February 4, 2008, running Linux (based on a CentOS distribution). The system components are connected via a full-Clos InfiniBand interconnect. Eighty-two compute racks house the quad-socket compute infrastructure, with additional racks housing login, I/O, and general management hardware. Compute nodes are provisioned using local storage. Global, high-speed file systems are provided by the Lustre file system running across 72 I/O servers. Users interact with the system via four dedicated login servers and a suite of eight high-speed data servers. Resource management and job scheduling are handled by Sun Grid Engine (SGE).
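As a concrete illustration of how the distributed-memory compute nodes are used, the following is a minimal MPI sketch in C. It is not taken from TACC documentation, and the mpicc compiler wrapper and SGE submission details mentioned afterward are assumptions; the Ranger User Guide has the authoritative instructions.

    /* Minimal MPI example: each task reports its rank and the node it runs on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this task's id             */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI tasks  */
        MPI_Get_processor_name(host, &len);      /* compute node hosting task  */

        printf("task %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

On a system like Ranger such a program would typically be compiled with an MPI wrapper (for example, mpicc hello.c -o hello) and launched across nodes from an SGE batch script; the exact directives and launch command are covered in the Ranger User Guide.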

Any researcher at a U.S. institution can submit a proposal to request an allocation of cycles on the system. The request must describe the research, justify the need for such a powerful system to achieve new scientific discoveries, and demonstrate that the proposer's team has the expertise to utilize the resource effectively.

To submit a proposal to request an allocation, please visit the TeraGrid website.

Researchers at Texas higher education institutions, please contact TACC directly.

For more information about using Ranger, see the Ranger User Guide.


Dell Linux Cluster: Lonestar

System Name: Lonestar
Host Name: lonestar.tacc.utexas.edu
(lslogin1.tacc.utexas.edu, lslogin2.tacc.utexas.edu)
IP Address: 129.114.50.31, 129.114.50.32
Operating System: Linux
Number of Processing Cores: 5,840 (compute)
Total Memory: 11.6 TB
Peak Performance: 62 TFLOPS
Total Disk: 106.5TB (local), 103TB (global)
Description:

The TACC Dell Linux Cluster contains 5,840 cores within 1,460 Dell PowerEdge 1955 compute blades (nodes), 16 PowerEdge 1850 compute-I/O server nodes, and 2 PowerEdge 2950 (2.66GHz) login/management nodes. Each compute node has 8GB of memory, and the login/development nodes have 16GB. The system storage includes a 103TB parallel (WORK) Lustre file system and 106.5TB of local compute-node disk space (73GB/node). An InfiniBand switch fabric employing PCI Express interfaces interconnects the nodes (I/O and compute) in a fat-tree topology, with a point-to-point bandwidth of 1GB/sec (unidirectional).

Compute nodes have two processors, each a 2.66GHz dual-core Xeon 5100-series processor with a 4MB unified (Smart) L2 cache. Peak performance for the four cores is 42.6 GFLOPS. Key features of the Core micro-architecture include two cores per die, an L1 instruction cache, a 14-stage pipeline, eight pre-fetch units, macro-op fusion, double-speed integer units, an Advanced Smart (shared) L2 cache, and 16 new SSE3 instructions. The memory system uses Fully Buffered DIMMs (FB-DIMMs) and a 1333MHz (10.7 GB/sec) front-side bus.
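The peak figures above follow from the standard formula of clock rate times floating-point operations per cycle times core count. The short C sketch below (an illustration, not TACC code) reproduces them, assuming the usual convention of four double-precision floating-point operations per core per cycle (one packed SSE add plus one packed SSE multiply) for this processor generation.

    /* Back-of-the-envelope peak-performance arithmetic for Lonestar. */
    #include <stdio.h>

    int main(void)
    {
        const double clock_ghz       = 2.66;   /* Xeon 5100 clock rate          */
        const double flops_per_cycle = 4.0;    /* assumed DP FLOPs/core/cycle   */
        const int    cores_per_node  = 4;      /* two dual-core sockets         */
        const int    nodes           = 1460;   /* PowerEdge 1955 compute blades */

        double node_gflops   = clock_ghz * flops_per_cycle * cores_per_node;
        double system_tflops = node_gflops * nodes / 1000.0;

        printf("per-node peak: %.1f GFLOPS\n", node_gflops);   /* 42.6 */
        printf("system peak:   %.1f TFLOPS\n", system_tflops); /* 62.1 */
        return 0;
    }

The same formula applied to Champion's 96 Power5 cores at 1.9 GHz, each also capable of four floating-point operations per cycle through its two fused multiply-add units, gives roughly 730 GFLOPS, matching the peak listed below.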

For more information about using Lonestar, see the Dell Linux Cluster User Guide.


IBM Power5 System: Champion

System Name: Champion
Host Name: champion.tacc.utexas.edu
IP Address: 129.114.4.52
Operating System: AIX
Number of Processors: 96 (Power5)
Total Memory: 192 GB
Peak Performance: 730 GFLOPS
Total Disk: 7.2 TB
Description:

The TACC IBM Power5 System consists of 12 IBM P5 575 shared-memory server nodes. Each server node contains 8 Power5 processors running at 1.9 GHz. In total, the 96-processor system has a peak performance of 730 GFLOPS with an aggregate memory of 192 GB. Each node is also supported by 36 GB of local disk, for a total of 432 GB, and a faster 7.2 TB GPFS file system. All server nodes are connected by an IBM high-performance Federation switch. The Power5 systems run AIX, a scalable UNIX operating system with High Availability Cluster Multi-Processing (HACMP) capabilities.

The IBM Power5 processor offers industry-leading floating-point performance, nearly double that of IBM's Power4 processor. The key physical technologies of the chip are silicon-on-insulator (SOI), copper interconnects, 130 million transistors per die, an on-chip L2 cache, and Multi-Chip Modules (MCM). The key architectural features are a high-speed clock, a 64-bit architecture, a 3-tier cache hierarchy, superscalar execution with speculative branching, out-of-order execution, pre-fetch streaming, and Simultaneous Multi-Threading (SMT). Evolving from the Power4/Power4+ architecture, the Power5 has a faster clock speed and larger L2 and L3 caches. In addition, the L3 cache has been moved closer to the processor on the module, an on-chip memory controller has been added, and the number of registers has been increased.
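Because each P5 575 node is a shared-memory machine, a threaded program can use all eight Power5 processors in a node (and, with SMT enabled, twice that many hardware threads) without any message passing. The OpenMP sketch below in C illustrates the idea; it is an assumption-laden example rather than TACC-supplied code, and the xlc_r compiler invocation mentioned afterward is likewise an assumption.

    /* Minimal OpenMP example: each thread on a shared-memory node reports itself. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* The thread count is chosen at run time, e.g. via OMP_NUM_THREADS. */
        #pragma omp parallel
        {
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

On AIX this would typically be built with IBM's XL C compiler (for example, xlc_r -qsmp=omp hello.c -o hello); the supported compilers and flags are documented in the IBM Power5 System User Guide.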

For more information about using Champion, see the IBM Power5 System User Guide.



Dell Linux Serial Cluster: Stampede

System Name: Stampede
Host Name: slogin1.tacc.utexas.edu
IP Address: 129.114.50.77
Operating System: Linux
Number of Processing Cores: 1,736 (compute)
Total Memory: 1800 GB
Peak Performance: 16 TFLOPS
Total Disk: 520 GB (local), 536 GB (shared), 68 TB (global, shared)
Description:

The current configuration of Stampede consists of 217 compute nodes, two login nodes, and a dedicated file server attached to one of the compute nodes. The nodes are interconnected using Gigabit Ethernet. Each compute node has two quad-core Intel Clovertown processors, 8 GB of memory, 600 GB of local disk space (of which 520 GB is available to the user), and 536 GB of shared disk. The dedicated file server provides 3.7 TB of storage to certain users and is mounted on all of the compute nodes. The system can also access 68 TB of global, parallel file storage that is managed by the Lustre file system and shared with the TACC Lonestar system.

For more information about using Stampede, see the Stampede User Guide.


For information about how to request an allocation on these systems, go to the HPC section of the Allocations page.