High Performance Computing

HPC2

The College of Engineering deployed a new High Performance Computing Cluster (HPC2) in Fall 2020.  

HPC2 technical details can be found at the HPC Core Facility website. If your research group is not part of the HPC2 Cluster and you would like to join, please send an email to coeithelp@ucdavis.edu so that we can discuss access.

HPC2 Participating Research Groups

Chemical Engineering
Ambarish Kulkarni
Roland Faller

Civil and Environmental Engineering
Geoff Schladow
Jonathan Herman

Mechanical and Aerospace Engineering
JP Delplanque
Seongkyu Lee

Materials Science and Engineering
Jeremy Mason

Center for Neuroscience
Charan Ranganath
Randall O’Reilly

HPC1

The College of Engineering High Performance Computing Cluster (HPC1) contains 60 compute nodes and central storage, all connected by InfiniBand networking. Each node contains 64GB of RAM shared by two CPU sockets, each with an 8-core CPU running at 2.4GHz. Central storage is managed by redundant storage servers, with 200TB of usable storage evenly allocated to researchers. The storage is intended for temporary computation and is not backed up or duplicated in any way, except that it is configured as RAID 6 and can therefore withstand up to two simultaneous hard drive failures.

Jobs are managed by the SLURM queue manager. Access to the cluster can be granted only to the participating professors and their research groups. If you qualify, enter your access application information here and your professor will be contacted to confirm your access.

Documentation on submitting jobs and other helpful links can be found here.
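
To give a sense of what job submission looks like (the partition, module, and program names below are placeholders, not HPC1's actual values), a minimal SLURM batch script has roughly this shape:

  #!/bin/bash
  #SBATCH --job-name=test-job          # name shown in the queue (placeholder)
  #SBATCH --partition=compute          # placeholder; use the partition assigned to your group
  #SBATCH --nodes=1                    # number of compute nodes requested
  #SBATCH --ntasks-per-node=16         # one task per core on a 16-core node
  #SBATCH --time=02:00:00              # wall-clock limit, HH:MM:SS
  #SBATCH --output=slurm-%j.out        # output file; %j expands to the job ID

  # Load any software modules the job needs (module names vary by cluster)
  module load openmpi

  # Run the program on the allocated cores
  srun ./my_program

The script is submitted with sbatch (for example, sbatch test-job.sh) and its status can be checked with squeue -u $USER. The actual partition names, available modules, and default limits are described in the documentation linked above.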

History

The cluster was built as a shared resource by participating College of Engineering professors with the understanding that the professors and their affiliated research groups have complete and immediate access to the cluster nodes they purchased. To illustrate, if a professor purchased five nodes of the cluster and wants to run a job on those five nodes right away, any jobs currently running on those nodes are stopped and put back into the input queue, and the professor's job runs immediately. If the professor needs more resources than their original purchase (say, 10 nodes), they can start a job requesting those resources, but that job may be bumped if the owners of the other nodes require them.
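
As a rough sketch of how this ownership model is typically expressed in SLURM (the node ranges, partition names, and priorities below are hypothetical, not HPC1's actual configuration), each group's purchased nodes can be placed in a high-priority owner partition that preempts and requeues jobs from the shared partition:

  # Hypothetical slurm.conf excerpt illustrating owner preemption
  PreemptType=preempt/partition_prio        # jobs in higher-priority partitions preempt lower ones
  PreemptMode=REQUEUE                       # preempted jobs are stopped and put back in the queue

  # Owner partition: only the five nodes this group purchased, highest priority
  PartitionName=group-owned Nodes=node[01-05] PriorityTier=10 Default=NO
  # Shared partition: all 60 nodes, lower priority; jobs here may be requeued by owners
  PartitionName=shared Nodes=node[01-60] PriorityTier=1 Default=YES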

The compute node configuration is a 1U Dell PowerEdge R630 server with:

  • Two Intel E5-2630 v3 2.4GHz CPUs with eight cores (16 threads) each
  • 64GB of RDIMM RAM
  • Intel QDR InfiniBand network adapter (40 gigabit, low latency)
  • 1 gigabit Ethernet network adapter
  • One 1TB 7200RPM hard drive
  • 10 Gigabit uplink to campus network backbone

Central storage is allocated based on the number of nodes purchased by a PI or research group, at 4TB per node. For example, a group that purchases 4 nodes is allocated 16TB of storage. Storage can be expanded if additional nodes are purchased later.

Several compute nodes have the same internal configuration but are 2U Dell PowerEdge R730 servers, which can accommodate two GPU cards in the future.

HPC1 Participating Research Groups

Biomedical Engineering
Sharon Aviran
Craig Benham
Yong Duan
Jinyi Qi
Leonor Saiz
Cheemeng Tan

Chemical Engineering
Jennifer Sinclair Curtis
Roland Faller
Ambarish Kulkarni

Civil and Environmental Engineering
Yueyue Fan
Jonathan Herman
Bassam Younis

Computer Science
Computer Science Department
Dipak Ghosal
Yong Jae Lee

Materials Science and Engineering
Jeremy Mason

Mechanical and Aerospace Engineering
Roger Davis
JP Delplanque
Seongkyu Lee
