Mason

System overview


Mason (mason.indiana.edu) at Indiana University is a large memory computer cluster configured to support data-intensive, high-performance computing tasks for researchers using genome assembly software (particularly software suitable for assembly of data from next-generation sequencers), large-scale phylogenetic software, or other genome analysis applications that require large amounts of computer memory. At IU, Mason accounts are available to IU faculty, postdoctoral fellows, research staff, and students involved in genome research. IU educators providing instruction on genome analysis software, and developers of such software, are also welcome to use Mason. IU has also made Mason available to genome researchers from the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) project.

Mason consists of 16 Hewlett Packard DL580 servers, each containing four Intel Xeon L7555 8-core processors and 512 GB of RAM. The total RAM in the system is 8 TB. Each server chassis has a 10-gigabit Ethernet connection to the other research systems at IU and the XSEDE network (XSEDENet).

The Mason nodes run Red Hat Enterprise Linux (RHEL 6.x). Job management is provided by the TORQUE resource manager (for more, see Running your applications below) in combination with the Moab job scheduler. Mason employs the Modules system to simplify application and environment configuration (for more, see Computing environment below).

System information

Note: The scheduled monthly maintenance window for Mason is the first Tuesday of each month, 7am-7pm.

System summary

Machine type        High-performance, data-intensive computing
Operating system    Red Hat Enterprise Linux 6.x
Memory model        Distributed
Nodes               16 Hewlett Packard DL580 servers
Network             10-gigabit Ethernet per node

Computational system details   Total                                    Per node
CPUs                           64 Intel Xeon L7555 8-core processors    4 Intel Xeon L7555 8-core processors
Processor cores                512                                      32
RAM                            8 TB                                     512 GB
Local storage                  8 TB                                     500 GB
Processing capability          Rmax = 4,302 gigaFLOPS                   Rmax = 239 gigaFLOPS
Benchmark data                 3,129.75 HPL gigaFLOPS                   222.22 HPL gigaFLOPS
Power usage                    0.000153 teraFLOPS per watt              0.000173 teraFLOPS per watt
Login nodes                    Rmax = 478 gigaFLOPS                     Rmax = 239 gigaFLOPS
Compute nodes                  Rmax = 3,824 gigaFLOPS                   Rmax = 239 gigaFLOPS

File systems (storage for IU users)

You can store files in your home directory or in scratch space:

  • Home directory: Your Mason home directory disk space is allocated on a Network-Attached Storage (NAS) device. You have a 10 GB disk quota, which is shared (if applicable) with your accounts on Big Red II, Quarry, and the Research Database Complex (RDC).

    The path to your home directory is (replace username with your Network ID username):

    /N/u/username/Mason
  • Local scratch: 450 GB of scratch disk space is available locally. Local scratch space is not intended for permanent storage of data, and is not backed up. Files in local scratch space are automatically deleted once they are 14 days old.

    Local scratch space is accessible via either of the following paths:

    /scratch
    /tmp
  • Shared scratch: Once you have an account on one of the UITS research computing systems, you also have access to 427 TB of shared scratch space.

    Shared scratch space is hosted on the Data Capacitor II (DC2) file system. The DC2 scratch directory is a temporary workspace. Scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The DC2 file system is mounted on IU research systems as /N/dc2/scratch and behaves like any other disk device. If you have an account on an IU research system, you can access /N/dc2/scratch/username (replace username with your IU Network ID username). Access to /N/dc2/projects requires an allocation. For details, see Data Capacitor II and DCWAN. Files in shared scratch space more than 60 days old are periodically purged, following user notification.

    Note: The Data Capacitor II (DC2) high-speed, high-capacity storage facility for very large data sets replaces the former Data Capacitor file system, which was decommissioned January 7, 2014. The DC2 scratch file system (/N/dc2/scratch) is mounted on Big Red II, Quarry, and Mason. Project directories on the former Data Capacitor were migrated to DC2 by UITS before the system was decommissioned. All data on the Data Capacitor scratch file system (/N/dc/scratch) were deleted when the system was decommissioned. If you have questions about the Data Capacitor's retirement, email the UITS High Performance File Systems group.
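
    For example, to stage data in your DC2 shared scratch directory, you might create a working directory there and copy input files into it from your home directory (replace username with your IU Network ID username; the directory and file names below are only placeholders):

    mkdir -p /N/dc2/scratch/username/my_project
    cp ~/reads.fastq /N/dc2/scratch/username/my_project/
    cd /N/dc2/scratch/username/my_project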

For more, see Disk space on IU's research computing systems.

Note: IU graduate students, faculty, and staff who need more than 10 GB of permanent storage can apply for accounts on the Research File System (RFS) and the Scholarly Data Archive (SDA). See Applying for your SDA or RFS account.

Working with electronic protected health information

Although this and other UITS systems and services have been approved by the IU Office of the Vice President and General Counsel (OVPGC) as appropriate for storing electronic protected health information (ePHI) regulated by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), if you use this or any other IU IT resource for work involving ePHI research data:

  • You and/or the project's principal investigator (PI) are responsible for ensuring the privacy and security of that data, and complying with applicable federal and state laws/regulations and institutional policies. IU's policies regarding HIPAA compliance require the appropriate Institutional Review Board (IRB) approvals and a data management plan.

  • You and/or the project's PI are responsible for implementing the HIPAA-required administrative, physical, and technical safeguards for any person, process, application, or service used to collect, process, manage, analyze, or store ePHI data.

Important: Although UITS HIPAA-aligned resources are managed using standards meeting or exceeding those established for managing institutional data at IU, and are approved by the IU Office of the Vice President and General Counsel (OVPGC) for storing research-related ePHI, they are not recognized by the IU Committee of Data Stewards as appropriate for storing other types of institutional data classified as "Critical" that are not ePHI research data. To determine which services are appropriate for storing sensitive institutional data, including ePHI research data, see Comparing supported data classifications, features, costs, and other specifications of file storage solutions and services with storage components available at IU.

The UITS Advanced Biomedical IT Core (ABITC) provides consulting and online help for IU researchers who need to securely process, store, and share ePHI research data. If you need help or have questions about managing HIPAA-regulated data at IU, contact Anurag Shankar at ABITC. For additional details about HIPAA compliance at IU, see HIPAA & ABITC and the Office of the Vice President and General Counsel (OVPGC) HIPAA Privacy & Security page.

System access

Access policy

Although Mason is an IU resource dedicated to genome analysis research, access to the cluster is not restricted to IU researchers; see the System overview above for who is eligible for an account.

Logging into Mason

IU users log into Mason with their IU Network IDs.

Note: Mason login nodes use the IU Active Directory Service for user authentication. As a result, local passwords/passphrases are not supported. For information about changing your ADS passphrase, see Changing your passphrase. For helpful information regarding secure passphrases, see Passwords and passphrases.

Researchers with NCGAS allocations authenticate with the credentials associated with their NCGAS allocations.

Researchers with XSEDE allocations authenticate with their XSEDE-wide logins.

Use of public key authentication is also permitted on Mason. For more, see In SSH and SSH2 for Unix, how do I set up public key authentication?
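
As a minimal sketch of that setup (assuming the OpenSSH client tools on your workstation; replace username with your IU Network ID username), you could generate a key pair and copy the public key to Mason as follows:

    ssh-keygen -t rsa                        # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub; choose a key passphrase when prompted
    ssh-copy-id username@mason.indiana.edu   # appends your public key to ~/.ssh/authorized_keys on Mason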

Methods of access

  • For IU and NCGAS users: Interactive access to Mason is provided via SSH only. Use an SSH2 client to connect to: mason.indiana.edu

    This will resolve to one of the following login nodes:

    h1.mason.indiana.edu
    h2.mason.indiana.edu
  • XSEDE users: Use GSI-SSH to connect to Mason.

    If you're logged into another XSEDE system via SSH (i.e., not already connected via GSI-SSH), you can connect to Mason directly using GSI-SSH and a MyProxy certificate. For example, from Trestles (SDSC):

    1. Make sure the globus module is loaded:

       [dartmaul@trestles-login2 ~]$ module load globus

    2. Get a certificate from the MyProxy server; when prompted, provide your XSEDE-wide password:

       [dartmaul@trestles-login2 ~]$ myproxy-logon -s myproxy.teragrid.org
       Enter MyProxy pass phrase: ****************
       A credential has been received for user dartmaul in /tmp/x509up_p13346.fileGzdqtd.1.

    3. Use GSI-SSH to connect to your XSEDE account on Mason (replace username with your XSEDE username):

       [dartmaul@trestles-login2 ~]$ gsissh username@mason.iu.xsede.org

    Additionally, XSEDE users are permitted to connect via SSH using public key authentication. For more, see Access Resources on the XSEDE User Portal.

Available software

Software installed on Mason is made available to users via Modules, an environment management system that lets you easily and dynamically add software packages to your user environment. For a list of software modules available on Mason, see Mason Modules in the IU Cyberinfrastructure Gateway.

For more on Modules, see Modules below.

For more on the IU Cyberinfrastructure Gateway, see The IU Cyberinfrastructure Gateway.

Note: Users can install software in their home directories on Mason and request the installation of software for use by all users on the system. Only faculty or staff can request software. If students require software packages on Mason, their advisors must request them. For more, see At IU, what is the policy about installing software on Mason?

Computing environment

Unix shell

The shell is the primary method of interacting with the Mason cluster. The command line interface provided by the shell lets users run built-in commands, utilities installed on the system, and even short ad hoc programs.

Mason supports the Bourne-again (bash) and TC (tcsh) shells. New user accounts are assigned the bash shell by default. For more on bash, see the Bash Reference Manual and the Bash (Unix shell) Wikipedia page.

To change your shell on Mason, use the changeshell command.

Note: Running chsh (instead of changeshell) changes your shell only on the node on which you run it, and leaves the other nodes of the cluster unchanged; changeshell prompts you with the shells available on the system, and changes your login shell system-wide within 15 minutes.

Environment variables

The shell uses environment variables primarily to modify shell behavior and the operation of certain commands. A good example is the PATH variable.

When the shell parses a command you have entered (i.e., after you hit Enter or Return), it interprets certain words you've typed as program files that should be executed. The shell then searches various directories on the system to locate these files. The PATH variable determines which directories are searched, and the order in which they are searched. In the bash shell, the PATH variable is a string of directories separated by colons (e.g., /bin:/usr/bin:/usr/local/bin). The shell searches for an executable file in the /bin directory, then the /usr/bin directory, and finally the /usr/local/bin directory. If files of the same name (e.g., foo) exist in all three directories, /bin/foo will be run, because the shell will find it first.

To display and change the values of environment variables:

Shell   Display a value   Change a value
bash    echo $VARNAME     export VARNAME=VALUE
tcsh    echo $VARNAME     setenv VARNAME VALUE
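
For example, in bash you could display your current search path and prepend a personal ~/bin directory to it for the current session (the ~/bin directory is only a placeholder):

echo $PATH
export PATH=~/bin:$PATH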

Startup scripts

Shells offer much flexibility in terms of startup configuration. On login, bash by default reads and executes commands from the following files (in this order):

/etc/profile
~/.bash_profile
~/.bashrc

Note: The ~ (tilde) represents your home directory (e.g., ~/.bash_profile is the .bash_profile file in your home directory).

On logout, the shell reads and executes ~/.bash_logout. For more on bash startup files, see the "Bash Startup Files" section of the Bash Reference Manual.
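
As an illustration, a minimal ~/.bash_profile might simply source ~/.bashrc and extend your search path (the ~/bin directory here is only a placeholder):

# ~/.bash_profile (sketch)
if [ -f ~/.bashrc ]; then
    . ~/.bashrc              # pull in the settings from ~/.bashrc
fi
export PATH=~/bin:$PATH      # add a personal bin directory to the search path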

On login, the tcsh shell reads and executes commands from the following files (also in this order):

/etc/csh.cshrc
/etc/csh.login
~/.tcshrc (if it exists; otherwise ~/.cshrc)
~/.history
~/.login
~/.cshdirs

In practice, on Mason, only the first two files exist. You may create the others, and add commands and variables to them as you see fit.

Modules

Mason uses the Modules package to provide a convenient method for dynamically modifying your environment.

Some common Modules commands include:

Command Action
module avail List all software packages available on the system.
module avail package List all versions of package available on the system, for example:

module avail openmpi
module list List all packages currently loaded in your environment.
module load package/version Add the specified version of the package to your environment, for example:

module load intel/11.1

To load the default version of the package, use:

module load package
module unload package Remove the specified package from your environment.
module swap package_A package_B Swap the loaded package (package_A) with another package (package_B).

This is synonymous with:

module switch package_A package_B
module show package Show what changes will be made to your environment (e.g., paths to libraries and executables) by loading the specified package.

This is synonymous with:

module display package
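
For example, a typical Modules session might look like the following (the package name is only illustrative; run module avail to see what is actually installed on Mason):

module avail velvet      # list available versions of the package
module load velvet       # add the default version to your environment
module list              # confirm which packages are loaded
module unload velvet     # remove the package again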

For more about the Modules package, see the module manual page and the modulefile manual page.

For information about using Modules on IU research systems, see On Big Red II, Mason, Quarry, and Rockhopper at IU, how do I use Modules to manage my software environment? For information about using Modules on XSEDE digital services, see On XSEDE, how do I manage my software environment using Modules?

Transferring your files to Mason

Mason supports SCP and SFTP for transferring files:

  • SCP: The SCP command line utility is included with OpenSSH. Basic use is: scp username@host1:file1 username@host2:file2

    For example, to copy foo.txt from the current directory on your computer to your home directory on Mason, use (replacing username with your username): scp foo.txt username@mason.indiana.edu:foo.txt

    You may specify absolute paths or paths relative to your home directory:

    scp foo.txt username@mason.indiana.edu:some/path/for/data/foo.txt

    You also may leave the destination filename unspecified, in which case it will become the same as the source filename. For more, see In Unix, how do I use SCP to securely transfer files between two computers?

  • SFTP: SFTP clients provide file access, transfer, and management, and offer functionality similar to FTP clients. For example, using a command-line SFTP client (e.g., from a Linux or Mac OS X workstation), you could transfer files as follows:

    $ sftp username@mason.indiana.edu
    username@mason.indiana.edu's password:
    Connected to mason.indiana.edu.
    sftp> ls -l
    -rw------- 1 username group 113 May 19 2011 loadit.pbs.e897
    -rw------- 1 username group 695 May 19 2011 loadit.pbs.o897
    -rw-r--r-- 1 username group 693 May 19 2011 local_limits
    sftp> put foo.txt
    Uploading foo.txt to /N/hd00/username/Mason/foo.txt
    foo.txt                                 100%   43MB  76.9KB/s   09:39
    sftp> exit
    $

    For more, see Transferring files with SFTP.

Additionally, XSEDE researchers can use GridFTP (via globus-url-copy or Globus Online) to securely move data to and from Mason's GridFTP endpoint:

gsiftp://gridftp.mason.iu.xsede.org:2811/
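
For example, with a valid proxy certificate you might push a local file to your Mason home directory using globus-url-copy (a sketch; the local path is a placeholder, and username is your XSEDE username):

globus-url-copy file:///home/jdoe/foo.txt gsiftp://gridftp.mason.iu.xsede.org:2811/N/u/username/Mason/foo.txt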

For more on XSEDE data transfers, see What data transfer methods are supported on XSEDE, and where can I find more information about data transfers?

Application development

Programming models

Mason is designed to support codes that have extremely large memory requirements. As these codes typically do not implement a distributed memory model, Mason is geared toward a serial or shared-memory parallel programming paradigm. However, Mason can support distributed memory parallelism.

Compiling

The GNU Compiler Collection (GCC) is added by default to your user environment on Mason. The Intel and Portland Group (PGI) compiler collections, and the Open MPI and MPICH wrapper compilers, are also available.

For the Intel compilers, the recommended optimization options are -O3 and -xHost (the -xHost option optimizes for the processor of the host on which you compile).

For the GCC compilers, the -mtune=native and -march=native options are recommended to generate instructions tuned for the machine and CPU type of the current host.

Examples

Serial programs:

  • To compile the C program simple.c:

    • With the GCC compiler: gcc -O2 -mtune=native -march=native -o simple simple.c
    • With the Intel compiler: icc -o simple simple.c
  • To compile the Fortran program simple.f:

    • With the GCC compiler: g77 -o simple simple.f
    • With the Intel compiler: ifort -O2 -o simple -lm simple.f

Parallel programs:

  • To compile the C program simple.c with the MPI wrapper script: mpicc -o simple simple.c
  • To compile the Fortran program simple.f with the MPI wrapper script: mpif90 -o simple -O2 simple.f
  • To use the GCC C compiler to compile simple.c to run in parallel using OpenMP: gcc -O2 -fopenmp -o simple simple.c
  • To use the Intel Fortran compiler to compile simple.f to run in parallel using OpenMP: ifort -openmp -o simple -lm simple.f
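
To run the resulting executables (a sketch; the thread and process counts are arbitrary examples, and anything beyond brief tests should be run through the batch system rather than on the login nodes):

# OpenMP: set the number of threads, then run the program
export OMP_NUM_THREADS=8
./simple

# MPI: launch with mpirun from the Open MPI or MPICH module used to compile
mpirun -np 4 ./simple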

Libraries

Both the Intel Math Kernel Library (MKL) and the AMD Core Math Library (ACML) are available on Mason.

Debugging

Both the Intel Debugger (IDB) and the GNU Project Debugger (GDB) are available on Mason.

For information about using the IDB, see the Intel IDB page.

For information about using the GDB, see the GNU GDB page. For an example, see Step-by-step example for using GDB within Emacs to debug a C or C++ program.

Running your applications

CPU/Memory limits and batch jobs

User processes on the login nodes are limited to 20 minutes of CPU time. Processes exceeding this limit are automatically terminated without warning. If you require more than 20 minutes of CPU time, use the TORQUE qsub command to submit a batch job (see Submitting a job below).

Implications of these limits on the login nodes are as follows:

  • The Java Virtual Machine must be invoked with a maximum heap size. Because of the way Java allocates memory, under these ulimit conditions an error will occur if Java is called without the -Xmx##m flag (see the example following this list).

  • Memory-intensive jobs started on the login nodes will be killed almost immediately. Debugging and testing on Mason should be done by submitting a request for an interactive job via the batch system, for example: qsub -I -q shared -l nodes=1:ppn=4,vmem=10gb,walltime=4:00:00

    The interactive session will start as soon as the requested resources are available.
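
For example, to cap the JVM heap at 2 GB when launching a Java application on a login node (the jar file name is a placeholder, and 2048 MB is an arbitrary illustrative value):

java -Xmx2048m -jar myprogram.jar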

Short definitions

  • A job is an instance of an application you wish to run.
  • A queue is a pool of compute resources that accepts jobs to run and executes them according to a fairshare policy.
  • A job scheduler is the application responsible for scheduling jobs.

Queues

The BATCH queue is the default, general-purpose queue on Mason. The default walltime is one hour; the maximum limit is two weeks. If your job requires more than two weeks of walltime, email the High Performance Systems group for assistance.

Queue policies

The Moab job scheduler uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and using Moab to check the status of your jobs on Mason, see What is Moab? For a summary of commands, see Common Moab scheduler commands.

Submitting a job

To submit a job to run on Mason, use the qsub command. If the command exits successfully, it will return a job ID, for example:

[jdoe@Mason]$ qsub job.script
123456.m1.mason
[jdoe@Mason]$

If you need attribute values different from the defaults, but less than the maximum allowed, specify these either in the job script using TORQUE directives, or on the command line with the -l switch. For example, to submit a job that needs more than the default 60 minutes of walltime, use:

qsub -l walltime=10:00:00 job.script

Jobs on Mason default to a per-job virtual memory resource of 8 MB. So, for example, to submit a job that needs 100 GB of virtual memory, use:

qsub -l nodes=1:ppn=4,vmem=100gb job.script

Note: Command-line arguments override directives in the job script, and you may specify many attributes on the command line, either as comma-separated options following the -l switch, or each with its own -l switch. The following two commands are equivalent:

qsub -l nodes=1:ppn=16,vmem=1024mb job.script
qsub -l nodes=1:ppn=16 -l vmem=1024mb job.script
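
For reference, a minimal TORQUE job script might look like the following sketch (the job name, module, program, and input file are placeholders; adjust the resource requests to fit your job):

#!/bin/bash
#PBS -l nodes=1:ppn=4,vmem=100gb,walltime=10:00:00
#PBS -m e                     # mail a summary report when the job terminates
#PBS -N my_assembly           # placeholder job name

cd $PBS_O_WORKDIR             # run from the directory where the job was submitted
module load somepackage       # placeholder: load the software your job needs
./my_program input.dat        # placeholder program and input file

Submit the script with qsub job.script, as shown above.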

Useful qsub options include:

Option Action
-a <date_time> Execute the job only after the specified date and time.
-I Run the job interactively. (Interactive jobs are forced to be non-rerunnable.)
-m e Mail a job summary report when the job terminates.
-q <queue name> Specify the destination queue for the job. (Not applicable on Mason.)
-r [y|n] Declare whether the job is re-runnable. Use the argument n if the job is not re-runnable. The default value is y (re-runnable).
-V Export all environment variables in your current environment to the job.

For more, see the qsub manual page.

Monitoring a job

To monitor the status of a queued or running job, use the TORQUE qstat command. Useful qstat options include:

Option Action
-a Display all jobs.
-f Write a full status display to standard output.
-n List the nodes allocated to a job.
-r Display jobs that are running.
-u user1@host,user2@host Display jobs owned by specified users.
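
For example, to see a summary of all of your own jobs, or the full status of the job submitted in the earlier example (replace username with your username):

qstat -a -u username
qstat -f 123456.m1.mason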

For more, see the qstat manual page.

Deleting a job

To delete a queued or running job, use the qdel command.
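
For example, to delete the job submitted in the earlier example by its job ID:

qdel 123456.m1.mason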

Occasionally, a node will become unresponsive and unable to respond to the TORQUE server's requests to kill a job. In such cases, try using qdel -W <delay> to override the delay between SIGTERM and SIGKILL signals (for <delay>, specify a value in seconds).

For more, see the qdel manual page.

Support

For IU and NCGAS users

Support for IU and NCGAS users is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups, and by the National Center for Genome Analysis Support (NCGAS):

  • If you have system-specific questions about Mason, email the HPS group.

  • If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on Mason, email the SciAPT group.

  • If you need help installing software packages in your home directory on Mason, email NCGAS.

For XSEDE users

XSEDE users with questions about hardware or software on Mason should contact the XSEDE Help Desk. For more, see How do I get help with XSEDE?

For more about XSEDE compute, advanced visualization, storage, and special purpose systems, see the Resources Overview, Systems Monitor, and User Guides. For scheduled maintenance windows, outages, and other announcements related to XSEDE digital services, see User News.

This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.