On this page:
- System overview
- System information
- File systems (storage for IU users)
- Working with ePHI research data
- System access
- Available software
- Computing environment
- Transferring your files to Mason
- Application development
- Queue information
- Requesting single user time
- Running jobs on Mason
System overview
Mason (mason.indiana.edu) at Indiana University is a large-memory computer cluster configured to support data-intensive, high-performance computing tasks for researchers using genome assembly software (particularly software suited to assembling data from next-generation sequencers), large-scale phylogenetic software, or other genome analysis applications that require large amounts of computer memory. At IU, Mason accounts are available to IU faculty, postdoctoral fellows, research staff, and students involved in genome research. IU educators providing instruction on genome analysis software, and developers of such software, are also welcome to use Mason. IU has also made Mason available to genome researchers from the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) project.
Mason consists of 18 Hewlett Packard DL580 servers, each containing four Intel Xeon L7555 8-core processors and 512 GB of RAM. The total RAM in the system is 8 TB. Each server chassis has a 10-gigabit Ethernet connection to the other research systems at IU and the XSEDE network (XSEDENet).
The Mason nodes run Red Hat Enterprise Linux (RHEL 6.x). Job management is provided by the TORQUE resource manager in combination with the Moab Workload Manager (for more, see Running jobs on Mason below). Mason employs the Modules system to simplify application and environment configuration (for more, see Computing environment below).
Note: The scheduled monthly maintenance window for Mason is the first Tuesday of each month, 7am-7pm.
System information
|Machine type|High-performance, data-intensive computing|
|Operating system|Red Hat Enterprise Linux 6.x|
|Nodes|18 Hewlett Packard DL580 servers|
|Network|10-gigabit Ethernet per node|

|Computational system details|Total|Per node|
|CPUs|64 Intel Xeon L7555 8-core processors|4 Intel Xeon L7555 8-core processors|
|RAM|8 TB|512 GB|
|Local storage|8 TB|500 GB|
|Processing capability|Rmax = 4,302 gigaFLOPS|Rmax = 239 gigaFLOPS|

|Benchmark data (HPL gigaFLOPS)|Total|Per node|
|Power usage|0.000153 teraFLOPS per watt|0.000173 teraFLOPS per watt|
|Login nodes|Rmax = 478 gigaFLOPS|Rmax = 239 gigaFLOPS|
|Compute nodes|Rmax = 3,824 gigaFLOPS|Rmax = 239 gigaFLOPS|
File systems (storage for IU users)
You can store files in your home directory or in scratch space:
Home directory: Your Mason home directory disk space is allocated on a Network-Attached Storage (NAS) device. You have a 10 GB disk quota, which is shared (if applicable) with your accounts on Big Red II, Quarry, and the Research Database Complex. The path to your home directory is /N/u/username/Mason (replace username with your IU Network ID username).
Local scratch: 450 GB of local scratch disk space is available on each node. Local scratch space is not intended for permanent storage of data and is not backed up; files in local scratch space are automatically deleted once they are 14 days old. Local scratch space is accessible via either of the following paths:
/scratch
/tmp
Shared scratch: Once you have an account on one of the UITS research computing systems, you also have access to 427 TB of shared scratch space.
Shared scratch space is hosted on the Data Capacitor II (DC2) file system. The DC2 scratch directory is a temporary workspace. Scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The DC2 file system is mounted on IU research systems as /N/dc2/scratch and behaves like any other disk device. If you have an account on an IU research system, you can access your shared scratch space at /N/dc2/scratch/username (replace username with your IU Network ID username). Access to /N/dc2/projects requires an allocation. For details, see Data Capacitor II and DCWAN. Files in shared scratch space more than 60 days old are periodically purged, following user notification.
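For example, assuming your username is jdoe (a placeholder), you could use a standard find command to see which of your shared scratch files are approaching the 60-day purge window:
# list files in your DC2 scratch directory not modified in the last 50 days ("jdoe" is a placeholder username)
find /N/dc2/scratch/jdoe -type f -mtime +50 -ls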
Note: The Data Capacitor II (DC2) high-speed, high-capacity storage facility for very large data sets replaces the former Data Capacitor file system, which was decommissioned January 7, 2014. The DC2 scratch file system (/N/dc2/scratch) is mounted on Big Red II, Quarry, and Mason. Project directories on the former Data Capacitor were migrated to DC2 by UITS before the system was decommissioned. All data on the Data Capacitor scratch file system (/N/dc/scratch) were deleted when the system was decommissioned. If you have questions about the Data Capacitor's retirement, email the UITS High Performance File Systems group.
For more, see Disk space on IU's research computing systems.
Note: IU graduate students, faculty, and staff who need more than 10 GB of permanent storage can apply for accounts on the Research File System (RFS) and the Scholarly Data Archive (SDA). See Applying for your SDA or RFS account.
Working with ePHI research data
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of personal health data. The HIPAA Security Rule set national standards specifically for the security of protected health information (PHI) that is created, stored, transmitted, or received electronically (i.e., electronic protected health information, or ePHI). To ensure the confidentiality, integrity, and availability of ePHI data, the HIPAA Security Rule requires organizations and individuals to implement a series of administrative, physical, and technical safeguards when working with ePHI data.
Although you can use this system for processing or storing electronic protected health information (ePHI) related to official IU research:
- You and/or the project's principal investigator (PI) are
responsible for ensuring the privacy and security of that data, and
complying with applicable federal and state laws/regulations and
institutional policies. IU's policies regarding HIPAA compliance
require the appropriate Institutional Review Board (IRB) approvals and
a data management plan.
- You and/or the project's PI are responsible for implementing HIPAA-required administrative, physical, and technical safeguards to any person, process, application, or service used to collect, process, manage, analyze, or store ePHI data.
The UITS Advanced Biomedical IT Core (ABITC) provides consulting and online help for Indiana University researchers who need help securely processing, storing, and sharing ePHI research data. If you need help or have questions about managing HIPAA-regulated data at IU, contact the ABITC. For additional details about HIPAA compliance at IU, see HIPAA & ABITC and the Office of Vice President and General Counsel (OVPGC) HIPAA Privacy & Security page.
Important: Although UITS HIPAA-aligned resources are managed using standards surpassing official standards for managing institutional data at IU and are appropriate for storing HIPAA-regulated ePHI research data, they are not recognized by the IU Committee of Data Stewards as appropriate for storing institutional data elements classified as Critical that are not ePHI data. For help determining which institutional data elements classified as Critical are considered ePHI, see Which data elements in the classifications of institutional data are considered protected health information (PHI)?
The IU Committee of Data Stewards and the University Information Policy Office (UIPO) set official classification levels and data management standards for institutional data in accordance with the university's Management of Institutional Data policy. If you have questions about the classifications of institutional data, contact the appropriate Data Steward. To determine the most sensitive classification of institutional data you can store on any given UITS service, see the "Choosing an appropriate storage solution" section of At IU, which dedicated file storage services and IT services with storage components are appropriate for sensitive institutional data, including ePHI research data?
Note: In accordance with standards for access control mandated by the HIPAA Security Rule, you are not permitted to access ePHI data using a group (or departmental) account. To ensure accountability and enable only authorized users to access ePHI data, IU researchers must use their personal Network ID credentials for all work involving ePHI data.
System access
Although Mason is an IU resource dedicated to genome analysis research, access to the cluster is not restricted to IU researchers:
- At IU, students, faculty, and staff can request accounts on Mason via the Account Management Service (AMS); see Instructions for getting additional computing accounts at IU.
- NSF-funded life sciences researchers at other institutions can
apply to the National Center for Genome Analysis Support (NCGAS) allocations committee to request
access to Mason. To request an allocation, submit the NCGAS
Allocations Request Form. If you have questions, email NCGAS.
- Access to Mason is also available to Extreme Science and Engineering Discovery Environment (XSEDE) researchers through the normal XSEDE allocation process.
For information about your responsibilities as a user of this resource, see:
- IU and NCGAS users: Your responsibilities as a computer user at IU
- XSEDE users: What are my responsibilities as an XSEDE user?
IU and NCGAS users: Use an SSH2 client to connect to mason.indiana.edu. This hostname resolves to one of the following login nodes:
h1.mason.indiana.edu
h2.mason.indiana.edu
IU users authenticate with their Network ID usernames and passphrases.
NCGAS researchers authenticate with the credentials associated with their NCGAS allocations.
IU and NCGAS users also may set up public key authentication; see In SSH and SSH2 for Unix, how do I set up public key authentication?
Note: Mason login nodes use the IU Active Directory Service (ADS) for user authentication. As a result, local passwords/passphrases are not supported. For information about changing your ADS passphrase, see Changing your passphrase. For helpful information regarding secure passphrases, see Passwords and passphrases.
XSEDE users: Use GSI-SSH with your
XSEDE-wide login from the Single Sign-On Login
Hub or one of the GSI-SSH desktop clients.
Alternatively, if you're connected to another XSEDE system via SSH (i.e., not using one of the GSI-SSH methods), you can connect to Mason directly using GSI-SSH and a MyProxy certificate. For example, from Trestles (SDSC):
- Make sure the globus module is loaded:
[dartmaul@trestles-login2 ~]$ module load globus
- Get a certificate from the MyProxy server; when prompted, provide your XSEDE-wide password:
[dartmaul@trestles-login2 ~]$ myproxy-logon -s myproxy.teragrid.org
Enter MyProxy pass phrase: ****************
A credential has been received for user dartmaul in /tmp/x509up_p13346.fileGzdqtd.1.
- Use GSI-SSH to connect to your XSEDE account on Mason (replace username with your XSEDE username):
[dartmaul@trestles-login2 ~]$ gsissh username@mason.indiana.edu
Additionally, XSEDE users can set up public key authentication. For more, see Access Resources on the XSEDE User Portal.
Available software
Mason uses the Modules package to provide a convenient method for dynamically adding software packages to your user environment.
Note: Mason users are free to install software in their home directories and may request the installation of software for use by all users on the system. Only faculty or staff can request software. If students require software packages on Mason, their advisors must request them. For details, see At IU, what is the policy about installing software on Mason?
Some common Modules commands include:
|module avail|List all software packages available on the system.|
|module avail package|List all available versions of the specified package (e.g., module avail openmpi).|
|module list|List all packages currently loaded in your environment.|
|module load package|Add the specified package to your environment (e.g., module load intel/11.1). To load the default version of a package, omit the version number (e.g., module load intel).|
|module unload package|Remove the specified package from your environment.|
|module swap package_A package_B|Swap the loaded package (package_A) with another package (package_B). This is synonymous with: module switch package_A package_B|
|module show package|Show what changes will be made to your environment (e.g., paths to libraries and executables) by loading the specified package. This is synonymous with: module display package|
To make permanent changes to your environment, edit your ~/.modules file. For more, see In Modules, how do I save my environment with a .modules file?
For information about using Modules on IU research systems, see On Big Red II, Mason, Quarry, and Rockhopper at IU, how do I use Modules to manage my software environment? For information about using Modules on XSEDE digital services, see On XSEDE, how do I manage my software environment using Modules?
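For example, a typical Modules workflow on Mason might look like the following (the package names are illustrative; use module avail to see what is actually installed):
module avail            # list available packages
module load gcc         # load the default version of a package
module list             # confirm which packages are loaded
module unload gcc       # remove the package when finished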
Computing environment
The Linux shell is both a command interpreter and a programming language. The shell command line provides a user interface for invoking commands to execute various shell functions, built-in utilities, and executable files. You can combine shell commands in a text file to create a shell script, and then invoke the shell script on the command line. As a result, the shell reads and executes the commands from the shell script.
Mason supports the Bourne-again shell (bash) and the TENEX C shell (tcsh). New users are assigned the bash shell by default. To change your login shell on Mason, use changeshell (instead of chsh): chsh changes your shell only on the node on which you run it, and leaves the other nodes of the cluster unchanged; changeshell prompts you with the shells available on the system, and changes your login shell system-wide within 15 minutes.
Environment variables are named values that can affect shell behavior and the operation of certain commands.
The PATH variable contains a colon-separated string of directories, for example:
/bin:/usr/bin:/usr/local/bin
This tells the shell where (in which directories) and in what order to search for functions, built-in utilities, or executable files that correspond to user-entered commands. According to the above example, the shell would search the following directories in this order:
/bin
/usr/bin
/usr/local/bin
If executables with identical filenames exist in more than one of the directories in the PATH variable (e.g., /bin/hyperdrive and /usr/local/bin/hyperdrive), the shell will execute the first file it finds in the search order (in this example, /bin/hyperdrive).
To display the values of all environment variables set for your user environment on Mason, on the command line, enter:
env
To display the value of a particular environment variable, use echo (e.g., echo $PATH).
To change the value of an environment variable (here, a generic VARNAME):
bash: export VARNAME=NEW_VALUE
tcsh: setenv VARNAME NEW_VALUE
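For example, to prepend a hypothetical directory of your own executables ($HOME/bin) to your search path for the current session:
bash: export PATH=$HOME/bin:$PATH
tcsh: setenv PATH $HOME/bin:$PATH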
When you log into Mason, your shell executes commands from certain startup files. Depending on your shell:
- The bash shell reads and executes commands from the following startup files, in this order: /etc/profile, ~/.bash_profile, ~/.bashrc. The ~ (tilde) represents your home directory (e.g., ~/.bash_profile is the .bash_profile file in your home directory).
- The tcsh shell reads and executes commands from the following startup files, in this order: /etc/csh.cshrc, /etc/csh.login
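For example, under bash you could append an export line to your ~/.bash_profile so a variable is set automatically at every login (PROJECT_DIR is a hypothetical variable used only for illustration):
echo 'export PROJECT_DIR=$HOME/projects' >> ~/.bash_profile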
Transferring your files to Mason
Mason supports SCP and SFTP for transferring files:
SCP: The SCP command line utility is included
with OpenSSH. Basic use is:
scp username@host1:file1 username@host2:file2
For example, to copy foo.txt from the current directory on your computer to your home directory on Mason, use (replacing username with your username):
scp foo.txt username@mason.indiana.edu:foo.txt
You may specify absolute paths or paths relative to your home directory:
scp foo.txt username@mason.indiana.edu:some/path/for/data/foo.txt
You also may leave the destination filename unspecified, in which case it will become the same as the source filename. For more, see In Unix, how do I use SCP to securely transfer files between two computers?
SFTP: SFTP clients provide file access, transfer, and management, and offer functionality similar to FTP clients. For example, using a command-line SFTP client (e.g., from a Linux or Mac OS X workstation), you could transfer files as follows:
$ sftp username@mason.indiana.edu
Connected to mason.indiana.edu.
sftp> ls -l
-rw------- 1 username group 113 May 19 2011 loadit.pbs.e897
-rw------- 1 username group 695 May 19 2011 loadit.pbs.o897
-rw-r--r-- 1 username group 693 May 19 2011 local_limits
sftp> put foo.txt
Uploading foo.txt to /N/hd00/username/Mason/foo.txt
foo.txt 100% 43MB 76.9KB/s 09:39
For more, see Transferring files with SFTP.
Additionally, XSEDE researchers can use GridFTP (via globus-url-copy or Globus Online) to securely move data to and from Mason's GridFTP endpoint.
For more on XSEDE data transfers, see What data transfer methods are supported on XSEDE, and where can I find more information about data transfers?
Application development
Mason is designed to support codes that have extremely large memory requirements. Because these codes typically do not implement a distributed-memory model, Mason is geared toward serial or shared-memory parallel programming paradigms; however, Mason can also support distributed-memory parallelism.
The GNU Compiler Collection (GCC) is added by default to your user environment on Mason. The Intel and Portland Group (PGI) compiler collections, and the Open MPI and MPICH wrapper compilers, are also available.
For the Intel compilers, the recommended optimization options are -O2 and -xHost (the -xHost option optimizes for the processor of the host on which the code is compiled). For the GCC compilers, the -mtune=native and -march=native options are recommended to generate instructions tuned to the machine and CPU type.
Following are example commands for compiling serial and parallel programs on Mason:
- To compile the C program simple.c:
- With the GCC compiler: gcc -O2 -mtune=native -march=native -o simple simple.c
- With the Intel compiler: icc -o simple simple.c
- To compile the Fortran program simple.f:
- With the GCC compiler: gfortran -o simple simple.f
- With the Intel compiler: ifort -O2 -o simple -lm simple.f
- To compile the C program simple.c with the MPI wrapper script: mpicc -o simple simple.c
- To compile the Fortran program simple.f with the MPI wrapper script: mpif90 -o simple -O2 simple.f
- To use the GCC C compiler to compile simple.c to run in parallel using OpenMP: gcc -O2 -fopenmp -o simple simple.c
- To use the Intel Fortran compiler to compile simple.f to run in parallel using OpenMP (see the run example after this list): ifort -openmp -o simple -lm simple.f
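When running an OpenMP executable built as shown above, the number of threads is typically controlled with the standard OMP_NUM_THREADS environment variable (the thread count here is only an example):
export OMP_NUM_THREADS=8   # bash; use "setenv OMP_NUM_THREADS 8" under tcsh
./simple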
Both the Intel Math Kernel Library (MKL) and the AMD Core Math Library (ACML) are available on Mason.
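For example, one common way to link against MKL with the Intel compilers is the -mkl convenience flag (a sketch; it assumes the Intel compiler module is loaded and may not apply to very old compiler versions):
icc -O2 -o simple simple.c -mkl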
Both the Intel Debugger (IDB) and the GNU Project Debugger (GDB) are available on Mason.
For information about using the IDB, see the Intel IDB page.
For information about using the GDB, see the GNU GDB page. For an example, see Step-by-step example for using GDB within Emacs to debug a C or C++ program.
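For example, to debug a program with GDB, compile it with debugging symbols and then launch the debugger (a minimal sketch):
gcc -g -O0 -o simple simple.c   # -g adds debugging symbols; -O0 disables optimization
gdb ./simple                    # then use commands such as break, run, and backtrace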
Queue information
The BATCH queue is the default, general-purpose queue on Mason. The default walltime is one hour; the maximum is two weeks. If your job requires more than two weeks of walltime, email the High Performance Systems group for assistance.
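To see the queues defined on Mason and their current limits and job counts, you can use the standard TORQUE queue-status options (a sketch; output format varies by TORQUE version):
qstat -q             # summary of all queues, including walltime limits
qstat -Q -f BATCH    # full attribute listing for the BATCH queue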
Note: To best meet the needs of all research projects affiliated with Indiana University, the High Performance Systems (HPS) team administers the batch job queues on UITS Research Technologies supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, fill out and submit the Research Technologies Ask RT for Help form (for "Help Needed", select High Performance Systems job or queue).
Requesting single user time
Although UITS Research Technologies cannot provide dedicated access
to an entire compute system during the course of normal operations,
"single user time" is made available by request one day a month during
each system's regularly scheduled
maintenance window to accommodate IU researchers with tasks
requiring dedicated access to an entire compute system. To request
single user time on one of IU's research computing systems, fill out
and submit the Research Technologies Ask RT for Help form (for
"Help Needed", select
Request to run jobs in single user time on
HPS systems). If you have questions about single user time on IU
research computing systems, email the HPS team.
Running jobs on Mason
Mason uses the TORQUE resource manager (based on OpenPBS) and the Moab Workload Manager to manage and schedule jobs. For information about using TORQUE on Mason, see What is TORQUE, and how do I use it to submit and manage jobs on high-performance computing systems? Moab uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and using Moab to check the status of batch jobs, see the Moab Workload Manager documentation.
CPU/Memory limits and batch jobs
User processes on the login nodes are limited to 20 minutes of CPU
time. Processes exceeding this limit are automatically terminated
without warning. If you require more than 20 minutes of CPU time, use the qsub command to submit a batch job (see Submitting jobs below).
Implications of these limits on the login nodes are as follows:
- The Java Virtual Machine must be invoked with a maximum heap size. Because of the way Java allocates memory, under these ulimit conditions an error will occur if Java is called without the -Xmx (maximum heap size) option; see the example after this list.
- Memory-intensive jobs started on the login nodes will be killed almost immediately. Debugging and testing on Mason should be done by submitting a request for an interactive job via the batch system, for example:
qsub -I -q shared -l nodes=1:ppn=4,vmem=10gb,walltime=4:00:00
The interactive session will start as soon as the requested resources are available.
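For example, to invoke Java on a login node with an explicit 2 GB maximum heap (the class name MyAnalysis is a placeholder):
java -Xmx2g MyAnalysis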
Submitting jobs
To submit a job to run on Mason, use the TORQUE qsub command. If the command exits successfully, it will return a job ID that you can use to monitor, modify, or delete the job.
If you need attribute values different from the defaults, but less than the maximum allowed, specify them either in the job script using TORQUE directives, or on the command line with the -l switch. For example, to submit a job that needs more than the default 60 minutes of walltime, use:
qsub -l walltime=24:00:00 job.script
Jobs on Mason default to a per-job virtual memory (vmem) limit of 8 GB. So, for example, to submit a job that needs 100 GB of virtual memory, use:
qsub -l nodes=1:ppn=4,vmem=100gb job.script
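As an illustration, a minimal TORQUE job script for Mason might look like the following (the resource values, job name, and application command are placeholders to adapt to your work):
#!/bin/bash
#PBS -N my_assembly
#PBS -l nodes=1:ppn=4,vmem=100gb,walltime=12:00:00
#PBS -m e
cd $PBS_O_WORKDIR
./my_application input.dat
Submit the script with qsub (e.g., qsub job.script).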
Note: Command-line arguments override directives in the job script, and you may specify multiple attributes on the command line, either as comma-separated options following a single -l switch, or each with its own -l switch. The following two commands are equivalent:
qsub -l nodes=1:ppn=4,vmem=100gb,walltime=24:00:00 job.script
qsub -l nodes=1:ppn=4 -l vmem=100gb -l walltime=24:00:00 job.script
Useful qsub options include:
|-a date_time|Execute the job only after the specified date and time.|
|-I|Run the job interactively. (Interactive jobs are forced to be not re-runnable.)|
|-m e|Mail a job summary report when the job terminates.|
|-q queue|Specify the destination queue for the job. (Not applicable on Mason.)|
|-r y/n|Declare whether the job is re-runnable; use the argument y (yes) or n (no).|
|-V|Export all environment variables in your current environment to the job.|
For more, see the qsub manual page.
To monitor the status of a queued or running job, use the TORQUE qstat command. Useful qstat options include:
|-a|Display all jobs.|
|-f|Write a full status display to standard output.|
|-n|List the nodes allocated to a job.|
|-r|Display jobs that are running.|
|-u user_list|Display jobs owned by the specified users.|
For more, see the qstat manual page.
To delete a queued or running job, use the qdel command.
Occasionally, a node will become unresponsive and unable to respond
to the TORQUE server's requests to kill a job. In such cases, try
qdel -W <delay> to override the delay between
SIGTERM and SIGKILL signals (for
<delay>, specify a
value in seconds).
For more, see the
qdel manual page.
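For example, to delete a job whose ID is 123456 (a placeholder), and to retry with a 30-second delay between the SIGTERM and SIGKILL signals if the node is slow to respond:
qdel 123456
qdel -W 30 123456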
For IU and NCGAS users
Support for IU and NCGAS users is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups, and by the National Center for Genome Analysis Support (NCGAS):
- If you have system-specific questions about Mason, email the HPS team.
- If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on Mason, email the SciAPT team.
- If you need help installing software packages in your home directory on Mason, email NCGAS.
For XSEDE users
For more about XSEDE compute, advanced visualization, storage, and special purpose systems, see the Resources Overview, Systems Monitor, and User Guides. For scheduled maintenance windows, outages, and other announcements related to XSEDE digital services, see User News.
This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.