

Parallel Computing, Storage, and Visualization 


The UNM Center for Advanced Research Computing (CARC) supports supercomputing systems, high-throughput clusters, and large-scale disk storage for use by university researchers. CARC currently has over 2,200 compute cores spanning a variety of distributed- and shared-memory architectures. Online working NAS and nearline storage are provided by the Research Storage Consortium (RSC) HP x9000/7400 system (381 TB, RAID 6), with an integrated tape library for data archiving. Additional tape backup is provided by an in-house-developed, Amanda-based system with 30 TB of LTO tape storage rotated through a 15 TB robotic tape library. These compute and storage systems are housed in a state-of-the-art, raised-floor, climate- and access-controlled machine room.

CARC connects to Internet2 via UNM's 10 Gbps backbone network. This backbone connects via a 10 Gbps link to the central Rio Grande Valley GigaPoP, located in downtown Albuquerque and owned and operated by UNM. A summary of CARC compute resources can be found in the table below; further details about the Galles Beowulf Cluster follow the Supercomputer and Cluster Resources table. An NSF-style Facilities document is available to PIs for use in proposal preparation; email director@carc.unm.edu for more information.
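To give a rough sense of what the 10 Gbps connectivity described above means for moving research data, the following is a minimal back-of-the-envelope sketch. The 70% effective-throughput figure is an assumption for illustration, not a measured value; real transfer rates depend on protocol, tuning, and network load.

```python
# Back-of-the-envelope transfer-time sketch for a 10 Gbps link.
# EFFICIENCY is an assumed effective utilization, not a CARC-measured figure.

LINK_GBPS = 10          # backbone / GigaPoP link speed from the text above
EFFICIENCY = 0.70       # assumed fraction of line rate actually achieved

def transfer_hours(dataset_tb):
    """Hours to move dataset_tb terabytes over the link at the assumed efficiency."""
    bits = dataset_tb * 8e12                      # TB -> bits (decimal units)
    seconds = bits / (LINK_GBPS * 1e9 * EFFICIENCY)
    return seconds / 3600

# e.g., staging a full 30 TB tape set over the backbone:
print(f"{transfer_hours(30):.1f} hours")          # prints 9.5 hours
```

At these sizes, the network and the tape subsystem are comparable in throughput, which is why bulk archiving still favors the tape library for cold data.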

CARC Supercomputer and Cluster Resources

Dell PowerEdge 1950; Intel Xeon 5140 (EM64T, Woodcrest), 2.33 GHz. Scientific Linux; Myrinet 2000 interconnect; 4 GB RAM/node; 80 GB local disk/node.

SGI Altix ICE; Intel Xeon X5365 (EM64T, Clovertown), 3.0 GHz. Red Hat; dual QDR InfiniBand (separated IPC and I/O); 2 GB RAM/node; filesystem disk only.

Dell PowerEdge R620; Intel Xeon E5-2670 (Sandy Bridge), 2.6 GHz. Scientific Linux; QDR InfiniBand; 4 GB RAM/node; 1 TB local disk/node.

Silicon Mechanics A422.v3 shared-memory multiprocessor; AMD Opteron 6272, 2.1 GHz. 64 cores (32 + 32 FP co-processors); 256 GB shared RAM; filesystem disk only.

Linux Networx Evolocity; AMD Opteron 252, 2.6 GHz. Scientific Linux; 4 GB RAM/node; 1 TB local disk/node.

IBM System x3755; AMD Opteron 8214 (Santa Rosa), 2.2 GHz. Dual 10 Gbps SDR interconnect; 4 GB RAM/node; 1 TB local disk/node.

Galles Beowulf Cluster: Dell Optiplex GX620 (Intel Pentium D, 2.13 GHz) and Dell Optiplex 745 (Intel Core2, 2.8/3.0 GHz). Gigabit Ethernet; 200 nodes (including a 16-node Hadoop subsystem); 1 GB RAM/node; 80-140 GB local disk/node (1 TB on Hadoop nodes).


Galles Beowulf Cluster Specs

Queue Name               PD-3.00          PD-2.80          Core2                      Hadoop
Nodes                    3                79               102                        16
Cores per Node           2                2                2                          2
Total Cores              6                158              204                        32
Processor Architecture   Intel Pentium D  Intel Pentium D  Intel Core Duo             Intel Core Duo
CPU GHz                  3.00             2.80             2.13 / 2.40 / 2.80 / 3.00  2.80 / 3.00
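The theoretical peak performance of a queue can be estimated as nodes x cores/node x clock x FLOPs/cycle. A minimal sketch for the Galles queues above, assuming 2 double-precision FLOPs per cycle for these pre-AVX cores (an illustrative assumption, not a CARC-published figure), and using the slowest listed clock for the mixed-speed queues:

```python
# Sketch: theoretical peak FLOPS for the Galles queues.
# FLOPS_PER_CYCLE = 2 is an assumption for these pre-AVX processors.

QUEUES = {
    # name: (nodes, cores_per_node, clock_ghz, flops_per_cycle)
    "PD-3.00": (3,   2, 3.00, 2),
    "PD-2.80": (79,  2, 2.80, 2),
    "Core2":   (102, 2, 2.13, 2),  # lower bound: queue mixes 2.13-3.00 GHz parts
    "Hadoop":  (16,  2, 2.80, 2),
}

def peak_gflops(nodes, cores, ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS (no memory or I/O effects considered)."""
    return nodes * cores * ghz * flops_per_cycle

total = sum(peak_gflops(*spec) for spec in QUEUES.values())
for name, spec in QUEUES.items():
    print(f"{name:8s} {peak_gflops(*spec):8.1f} GFLOPS")
print(f"Total    {total:8.1f} GFLOPS (~{total / 1000:.2f} TFLOPS)")
```

Achieved LINPACK or application performance is typically well below these figures; they are upper bounds on the arithmetic pipelines alone.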

Research Group-Dedicated Machines

CARC also hosts dedicated research computing resources for faculty representing several Colleges and Schools at UNM (Arts & Sciences, Engineering, Fine Arts, Architecture, and the School of Medicine).

Anodyne: Dell PowerEdge R815; AMD Opteron 6174 12-Core 2.2GHz; 1 Node, 4 sockets; 48 cores; 192GB RAM Shared Memory; CentOS 5.10 Operating System. Anodyne is used for applied computational methods of electromagnetic geophysics. (Prof. Chester Weiss).

Apollo (UNM Cancer Center): Coupled IBM Intel Xeon 32- and 64-bit clusters with FAStT500 and DS400 Fibre Channel SAN storage components, supporting the UNM Cancer Center's Shared Resource for Genomics and Bioinformatics. These systems host a genome data warehouse based on the parallel Oracle RAC product, using a Web Services-based ETL (Extraction, Transformation, and Loading) paradigm to import data into an XML DB integration schema with multiple output data marts (Prof. C Willman; Prof. SR Atlas).

Bethe (Physics and Astronomy): Dual Co-Processor SuperServer; Intel Xeon / 6 cores / 64 GB RAM; Intel Xeon Phi 5110p / 1 TFLOPS double precision / 8 GB RAM; NVIDIA Titan GPU / 1.4 TFLOPS double precision / 6 GB RAM; 1.0 TB File Storage; SUSE Linux Enterprise Server OS.  GPU/Xeon Phi code development system for astrophysics and molecular biophysics (Prof. H Duan; Prof. SR Atlas).

Deepthought: Penguin Relion 2808GT + Relion 2800i; Intel Xeon E5-2650 V2 2.6 GHz; 4 Nova nodes + 4 CEPH nodes, 16 cores/node; 128 GB RAM/Nova node; 10G Ethernet interconnect; Scyld Cloud Management + OpenStack.  Compute and storage server + RSC staging system for high-throughput cancer genome analysis (10G "Science DMZ" connection to UNM Cancer Center Next-Gen sequencer).  (Prof. S Ness; Prof. C Willman)

Fluvial/Ubik (Earth and Planetary Sciences): A multi-cluster satellite data acquisition and analysis system, operated by CARC for the CREATE resident research group. The system consists of the Fluvial processing system and the Ubik file server. These systems feature a suite of commercial and open-source real-time satellite image processing software for the TeraScan MODIS and AVHRR satellite systems (Prof. L Scuderi).

LWA Data Archive (Physics and Astronomy): Silicon Mechanics Storform iServ R518; Intel Xeon E5620 quad-core, 2.40 GHz (12 MB cache, 5.86 GT/s QPI); 8 cores, 24 GB RAM; 50 TB RAID 6 storage; Ubuntu OS. Data storage/server for the NSF-supported Long-Wavelength Array Project (Prof. G Taylor).

m3 (Physics and Astronomy): 8 core, 16 GB RAM 64-bit Intel Xeon system, with 12 TB local workspace disk. Serves as analysis engine for the UNM ATLAS particle physics group and as the CARC gateway system connecting to the Open Science Grid (Prof. S Seidel, Prof. I Gorelov).

SkyScan System (ARTS Lab and School of Architecture): SkyScan gDome DigitalSky Cluster; ASUS P8Z68-V LX custom build; Intel i7 dual quad core; 8 nodes, 8 cores/node; Gb Ethernet interconnect; Windows 7 OS. The ARTS Lab supports research in digital graphics, sound, and real-time immersive projection using a 15′ diameter hemispheric domed projection surface (G-Dome Theater Display) with six projectors and five-channel audio (Prof. T Castillo; D Beining).

Synergy (Translational Informatics/Internal Medicine): PSSC Labs PowerWulf Compute Engine CBeST v. 3.0 Beowulf; 12 nodes, 8 cores/node; 16GB RAM/node; 4.5 TB Accessible RAID Storage; CentOS Operating System. Compute server for cheminformatics and small-molecule drug discovery (Prof. T Oprea).

Zeno (Mathematics and Statistics): Intel Xeon E5620 2.4 GHz; 4 Nodes, 8 cores/node; 32 GB RAM; 1.8 TB RAID Storage; Ubuntu OS. Compute server for computational geometry and biophysics research (Prof. E Coutsias).

Network Access

Energy Sciences Network (ESnet)

The Energy Sciences Network is a high-performance, unclassified national network built to support scientific research. Funded by the U.S. Department of Energy's Office of Science and managed by Lawrence Berkeley National Laboratory, ESnet provides services to more than 40 Department of Energy (DOE) research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting DOE-funded scientists to collaborate productively with partners around the world. UNM partners with ESnet to provide services to New Mexico's national laboratories: Los Alamos National Laboratory and Sandia National Laboratories.


Internet2

Internet2 is the foremost U.S. advanced networking consortium. Led by the research and education community since 1996, Internet2 promotes the missions of its members by providing both leading-edge network capabilities and unique partnership opportunities that together facilitate the development, deployment, and use of next-generation Internet technologies. Internet2 brings the U.S. research and academic community together with technology leaders from industry, government, and the international community to undertake collaborative efforts that have a fundamental impact on tomorrow's Internet.

The Internet2 Network is one component of Internet2’s comprehensive systems approach to developing and deploying advanced networking for the research and education community, which encompasses Network Technologies, Middleware, Security, Performance Measurement, and Community Collaboration. UNM researchers have access to all Internet2-connected resources such as the NSF XSEDE network of supercomputer centers.

Albuquerque GigaPoP (ABQG) 

In 2000, UNM IT established the Albuquerque GigaPoP (ABQG), an aggregation point of networks providing high-bandwidth connectivity to the State of New Mexico. The ABQG is the "on-ramp" for all high-speed national networks, including Internet2 and ESnet. Access to the commodity Internet and to peering, which keeps in-state traffic local, is also available. Operated by the University of New Mexico, the ABQG is a state-of-the-art interconnection facility designed to serve research and education programs in the state. Participants include New Mexico Institute of Mining and Technology, New Mexico State University, the New Mexico Council for Higher Education Computing/Communication Services (CHECS), and the New Mexico Department of Information Technology (DoIT).

Center for Advanced Research Computing

MSC01 1190
1601 Central Ave. NE
Albuquerque, NM 87106

p: 505.277.8249
f:  505.277.8235