Torus is now operating beyond its end-of-life (i.e., it is over 5 years old and out of warranty). Unfortunately, it has also suffered a severe software issue we have not yet managed to resolve. While we hope to bring it back into operation at some point to offset the load on the other clusters, we cannot guarantee this will happen.
Torus is a medium-scale, high-performance Linux cluster purchased through a collaborative arrangement between CIBR and the Ludtke, Chiu, and Barth labs. Each group that contributed financially to the purchase of the cluster is entitled to a proportional share of its overall compute capacity. In theory, the cluster can provide up to 5,000,000 CPU-hr of computation annually (roughly 576 compute cores running 8,760 hours per year); however, it is impossible to keep such a cluster fully loaded all of the time. Typical clusters run at ~60-80% of capacity, and allocations reflect this.
CIBR faculty can obtain time allocations on this cluster simply by requesting them via email to email@example.com. Initial allocations of 10,000 CPU-hr do not require any formal application; a simple request suffices. The request must come from the PI, not students or postdocs, since allocations are made on a per-faculty basis and the professor must decide how to allocate the time among people in their lab. Faculty must be members of CIBR to receive free time on the cluster (membership is free). If the initial allocation is exhausted, larger allocations may be possible, depending on usage levels and the number of requests in that quarter.
Hardware & Software
- 1 head node for job management
- 1 storage node for home directories and mid-term data storage, with a 40 TB RAID array
- 48 compute nodes, each with:
  - 12 cores (dual 6-core CPUs)
  - 48 GB of RAM (4 GB/core)
  - 2 TB hard drive (1 TB local scratch, 1 TB Lustre)
- 4 additional "bioinformatics" nodes, used exclusively for the "bioinfo" queue, each with:
  - 16 cores (dual 8-core CPUs)
  - 256 GB of RAM (16 GB/core)
  - 1 x 80 GB SSD for system use
  - 7 x 1 TB 2.5" hard drives in RAID5, mounted as /data1 (high-speed local storage)
- QDR InfiniBand (40 Gb/sec) interconnect between nodes for high-performance MPI and the Lustre filesystem
Unlike our previous collaborative clusters, Torus is equipped with a QDR InfiniBand interconnect, capable of very high-bandwidth transfers between nodes.
Please note: the primary RAID array for active data storage is not backed up in any way. RAID6 provides some redundancy and protection against routine drive failures, but if multiple drives fail at once, or some other hardware problem occurs, it is possible to lose everything stored on the primary RAID. For this reason we also offer more reliable backup storage for data that is not actively being processed but will still be needed on the cluster. Any user can request an allocation on this more reliable backup space.
The cluster runs CentOS 6, a Linux distribution equivalent to Red Hat Enterprise Linux 6 (RHEL 6).
Jobs are submitted for execution on the compute nodes through the Torque queuing system with the Maui scheduler, which is essentially equivalent to PBS. A minimal job script is sketched below.
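This sketch assumes a program named my_program and a script saved as example.pbs; the job name, walltime, and resource request are placeholders you should adjust to your own work.

```
#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -l nodes=1:ppn=12       # one node, all 12 cores
#PBS -l walltime=04:00:00    # requested run time
#PBS -j oe                   # merge stdout and stderr into a single output file

cd $PBS_O_WORKDIR            # start in the directory the job was submitted from
./my_program input.dat       # hypothetical program and input file
```

Submit the script with qsub example.pbs and check its progress with qstat.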
OpenMPI is available, and users are free to compile and use other MPI distributions that can take advantage of the InfiniBand interconnect.
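For multi-node MPI jobs, a submission script might look like the following sketch. The node count, walltime, and program name are placeholders, and whether mpirun picks up the node list automatically depends on whether the installed OpenMPI was built with Torque support.

```
#!/bin/bash
#PBS -N mpi_example
#PBS -l nodes=4:ppn=12       # 4 nodes x 12 cores = 48 MPI ranks
#PBS -l walltime=02:00:00

cd $PBS_O_WORKDIR
# An OpenMPI built with Torque support launches on the allocated nodes automatically;
# if yours was not, add: -np 48 -hostfile $PBS_NODEFILE
mpirun ./my_mpi_program
```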
- The cluster has a wide range of open-source programs and libraries installed on it as part of the CentOS distribution. Within limits, new packages requested by users can be added.
- There is, at present, no commercial software available on Torus. If you require a specific commercial software package, please contact us.
- BCM's Matlab license does not cover cluster use, though it is possible to install a license in your own account for use on one node at a time. To do this you need to run Matlab through PBS in interactive mode. You MUST still go through PBS; do not log in to a node directly and run Matlab. A sketch of an interactive session is shown below.
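As a rough sketch, assuming a modest request of one node for two hours (adjust to your needs; your own Matlab installation path and license setup may add steps):

```
# Request an interactive session on one node through the queuing system;
# do not ssh to a compute node directly.
qsub -I -l nodes=1:ppn=12,walltime=02:00:00

# Once the interactive prompt appears on the assigned compute node:
matlab -nodisplay
```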
Detailed Information on Using the Cluster
The cluster is administered in a fairly laissez-faire fashion. Generally speaking, all users, paid and free alike, share the same queuing system. While there are specific queues for high and low priority as well as for long-running jobs, there is no mechanism to preempt a running job; once a job is started, it will run (occupying its assigned resources) until it completes. The priority system only affects the order in which jobs are started. Please see the documentation below for more details. Under no circumstances should compute jobs be executed on the head node, or directly on any compute node without going through the queuing system. It is permissible to run short I/O-intensive jobs on the head node (pre-processing data and such), since the head node has more efficient storage access. A few common queue commands are sketched below.
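As a quick reference, these standard Torque commands cover most day-to-day interaction with the queues; the queue name shown is hypothetical, so use qstat -q to see the actual queues and limits configured on Torus.

```
qstat -q                      # list the available queues and their limits
qsub -q longjobs myjob.pbs    # submit to a specific queue ("longjobs" is a hypothetical name)
qstat -u $USER                # check the status of your own jobs
qdel <jobid>                  # cancel one of your jobs
```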
It is the user's responsibility to follow all cluster policies. While we try to be understanding of mistakes, we would much prefer to answer a question rather than spend two days fixing an accidental problem. Users who intentionally abuse policy may have their accounts temporarily or permanently suspended. In such situations, the user's PI will be consulted.
The cluster is maintained by Brandon Smith, who works for the NCMI and maintains the clusters only part-time. He will be happy to help you and answer your questions. If you don't understand something about how to use the cluster effectively, or have any questions or issues, don't hesitate to email him ( firstname.lastname@example.org ) or Steve Ludtke ( email@example.com ).
Last modified on July 20, 2015