Return to the CIBR Cluster Home Page

Introduction to Clusters and CIBR Philosophy

Glossary & Basic Concepts

First, a little terminology. Here is the hierarchy of computing resources on a modern cluster (you may find some disagreement about some of these specific definitions):

  • Server - a single physical box mounted in a rack in the cluster

  • Node - One server often contains multiple independent computers, or nodes. A 'node' is an independently operating computer.

  • Processor or CPU - A physical microprocessor chip inserted into the motherboard of the computer. Most Linux cluster nodes have 2 processors, though there are exceptions.

  • Core - A semi-independent computing unit inside a processor. The cores within a single physical package do share some resources (like cache), but can largely be treated as independent. Modern processors typically have 2-12 cores.

  • Thread - A fairly recent concept. Each core can do several non-overlapping things at the same time, and sometimes some of these resources sit idle due to conflicts between sequential operations. Threads allow you to recover some of this wasted capacity by running 2 jobs on each core. On current-generation processors, using this thread system can recoup as much as 10-20% of otherwise wasted capacity, but using it well is tricky. You will also often see vendors (like Amazon) advertising threads as if they were cores; this is patently false (see the short example after this list).
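
If you want to see the core/thread distinction on an actual node, a minimal sketch follows. It assumes the third-party psutil package is available; the page itself does not prescribe any particular tool:

{{{#!python
# Minimal sketch, assuming the third-party "psutil" package is installed.
# It reports physical cores separately from hardware threads ("logical" CPUs).
import psutil

physical = psutil.cpu_count(logical=False)  # independent cores
logical = psutil.cpu_count(logical=True)    # threads, as vendors often advertise them

print(physical, "physical cores,", logical, "hardware threads")
# Two threads sharing one core give roughly 1.1-1.2x the throughput of that
# core alone, not 2x, so plan job counts around the physical core count.
}}}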

Non-CPU terms:

  • RAM - The memory on a single node. This memory is shared among all of the cores on the node, and is often not explicitly allocated to specific tasks.

  • Swap - A concept used to avoid memory exhaustion on computers. When the RAM becomes full, the computer may temporarily store some information on the hard drive in specially allocated swap space. Note, however, that a typical hard drive can read/write information at ~150 MB/sec, while RAM can read/write at ~75,000 MB/sec (roughly 500x faster). Forcing a modern cluster to swap is a very bad thing, and the CIBR clusters have little or no swap space available.

  • Infiniband - A high-speed network used to help make many separate computers behave more like a single computer. It allows nodes to exchange information at up to 800 MB/sec on our clusters (QDR, single channel).

  • Gigabit - The normal network used between nodes (and by the computers in your lab); it can transfer ~100 MB/sec. The connection into the cluster from the outside world is a Gigabit connection.

  • RAID - A type of disk storage that combines multiple hard drives into a single larger 'virtual' drive which is both faster and provides some safety against drive failure (see the short capacity example after this list).

    • RAID5 - With N hard drives, provides a read performance improvement of up to N times and N-1 drives' worth of storage space. If any single drive fails, no data is lost.

    • RAID6 - Same as RAID5, but can tolerate 2 drives failing at once and provides N-2 drives' worth of space.

  • NFS - Network File System. Makes a hard drive/RAID on one computer available on other computers. Network communication can be a bottleneck, and the total bandwidth is shared among all of the computers using it.
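
The RAID capacity rule above (N-1 or N-2 drives' worth of usable space) is easy to work out in code. A minimal sketch follows; the drive count and size are made-up numbers, purely for illustration:

{{{#!python
# Usable RAID capacity under the N-1 / N-2 rule described above.
# The twelve 4 TB drives here are hypothetical, chosen only for illustration.
def usable_capacity_tb(n_drives, drive_tb, parity_drives):
    """Capacity left after `parity_drives` worth of space goes to redundancy."""
    return (n_drives - parity_drives) * drive_tb

n, size_tb = 12, 4
print("RAID5:", usable_capacity_tb(n, size_tb, 1), "TB usable, survives 1 failed drive")
print("RAID6:", usable_capacity_tb(n, size_tb, 2), "TB usable, survives 2 failed drives")
}}}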

CPU resources

If we consider the Prism cluster, for example, it contains (as of the time of writing) 11 servers, each of which has 4 nodes, each of which has 2 processors, each of which has 8 cores. 11*4*2*8 = 704 cores. Technically this could be interpreted as 1408 threads, but whereas 2 cores provide 2x performance, 2 threads on a single core typically provide only ~1.1-1.2x performance, and can only be used in very specific situations.
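
The same arithmetic, written out as a short sketch (all numbers are taken from the paragraph above):

{{{#!python
# Prism core count worked out in code; the figures come from the text above.
servers, nodes_per_server, cpus_per_node, cores_per_cpu = 11, 4, 2, 8

cores = servers * nodes_per_server * cpus_per_node * cores_per_cpu
threads = cores * 2  # what a vendor might advertise

print(cores, "real cores")          # 704
print(threads, "hardware threads")  # 1408, but each pair behaves like ~1.1-1.2 cores
}}}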

Network and Storage

Our clusters are typically configured with QDR Infiniband interconnects, meaning nodes can exchange data with each other at a rate of ~800 MB/sec. Note that the RAID6 storage has a single QDR connection, so the total file I/O bandwidth available to all of the nodes is ~800 MB/sec. While this number may seem higher than the ~150 MB/sec you can get from the scratch hard drive inside each node, it is shared among 44 nodes. If all nodes try to do file I/O at once, each will only have ~18 MB/sec of bandwidth available (roughly an order of magnitude slower than the local scratch space).
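
A quick sanity check of that shared-bandwidth figure (all numbers come from the paragraph above):

{{{#!python
# Per-node share of the RAID bandwidth when every node does I/O at once.
raid_bandwidth_mb_s = 800   # single QDR link to the RAID6 storage
nodes = 44
local_scratch_mb_s = 150    # hard drive inside each node

per_node = raid_bandwidth_mb_s / nodes
print(round(per_node), "MB/sec per node")  # ~18 MB/sec, roughly an order of
                                           # magnitude below the ~150 MB/sec
                                           # local scratch drive
}}}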

Memory (RAM)

We currently configure our clusters with 4GB of memory per core. This means a node on Prism has 64 GB of total RAM, shared among 16 cores.

Using Resources on a Cluster

Running jobs on a cluster is a little bit art and a little bit science. For any project you are considering running on a cluster, you should do the following pre-analysis:

  • Data size - reading and writing data takes a (comparatively) long time. Consider your project and the software you're running and estimate the total size of the data you will process and the number of times each data element will need to be read from disk.
  • CPU - How much computation will your task require, assuming reading the data from disk is instantaneous? That is, for one job working with the amount of data you determined above, how long would just the computation take on a single core?
  • Memory requirements - How much data needs to be in the computer's memory at a time, per process?
  • Parallelism - can your task take advantage of the shared memory on a single node? If your job supports threaded parallelism, then you could use all 64 GB of RAM (for example) and all 16 cores at the same time. Otherwise, you would be limited to 4 GB/core, or would have to leave some cores idle.

Take the data size and divide by 100 MB/sec. How does this amount of time compare to the CPU time estimate you made? If the CPU time estimate is an order of magnitude or more larger than the number you got from the data, then your job may be well suited for a cluster. If your memory requirements are larger than 4 GB/core, and your job does not support threaded parallelism, then you may need to consider a non-CIBR cluster where they have focused more $$$ on large memory configurations. If the project is small, we do have 4 specialized "bioinformatics nodes" on the Torus cluster which may suit your needs.
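
This rule of thumb is easy to turn into a small calculation. The sketch below is only illustrative: the data size, read count, and CPU estimate plugged in at the end are hypothetical numbers, not measurements from any real project.

{{{#!python
# Rule of thumb from the paragraph above: compare single-core compute time
# against the time needed to move the data at ~100 MB/sec.
def compute_to_io_ratio(data_gb, reads_per_element, cpu_hours_single_core,
                        io_rate_mb_s=100):
    """Ratio of compute time to I/O time; ~10x or more suggests a cluster fits."""
    io_hours = (data_gb * 1024 * reads_per_element) / io_rate_mb_s / 3600
    return cpu_hours_single_core / io_hours

# Hypothetical project: 200 GB of data, each element read 3 times,
# ~5000 core-hours of computation.
ratio = compute_to_io_ratio(data_gb=200, reads_per_element=3,
                            cpu_hours_single_core=5000)
if ratio >= 10:
    print("compute-to-I/O ratio ~%.0f: likely a good fit for a cluster" % ratio)
else:
    print("compute-to-I/O ratio ~%.0f: I/O-bound; a local RAID workstation may be better" % ratio)
}}}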

Again, compute clusters excel at running very CPU-intensive tasks with low file I/O requirements. Tasks such as molecular dynamics or other types of simulation are good examples of this. A task at the other extreme, such as searching a genome for possible primer sequences, is probably not something that should even be attempted on a cluster. Most tasks are somewhere between these extremes. For example, CryoEM single particle reconstruction does work with many GB of data, but the amount of processing required per byte of data is high enough that clusters can be efficiently used.

If your project is very data intensive, it may be worth considering an in-lab workstation configured with a high performance RAID array instead. Such a machine can be purchased for well under $10k, and (in 2014) can provide as much as ~1.5 GB/second read/write speeds. For rapidly processing large amounts of sequence data, machines like this can be much more time and cost-efficient than any sort of cluster.

If you have any questions about this for your specific project, please just email sludtke@bcm.edu and we can chat about it.
