All jobs must be run through the batch schedulers on the cluster. If you need more information on using them, please take a look at the HCC Documentation.
Please do not run jobs directly on the login (head) nodes. These nodes are reserved exclusively for logging in and interacting with the schedulers; under no circumstances should you run your jobs there.
To login to any of the HCC clusters, you will need to use SSH. In addition, Duo two-factor authentication is required.
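A typical login looks like the following. The hostname and username here are placeholders, not real values; substitute the cluster hostname given in the HCC Documentation and your own account name.

```shell
# Log in to an HCC cluster over SSH (replace <username> and <cluster-hostname>
# with your actual account name and the cluster's address).
ssh <username>@<cluster-hostname>
# After the password prompt, Duo asks for a second factor
# (e.g. a push notification or a passcode).
```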
HCC asks to be notified of any presentation or publication that results entirely or in part from use of HCC resources. If possible and practical, please send an electronic copy to David Swanson, Director of HCC. If a digital copy is not available, please mail a hard copy of any publications to David Swanson, SHOR 118K, UNL 68588-0150.
Users are asked to include the following line as an acknowledgement of HCC use:
All accounts on HCC machines are 'owned' by the research advisor or head of the group you signed up with. This does not allow a research advisor to snoop through the group's files, but it does allow them to retrieve data for research purposes. Further, all files on HCC machines may be searched by HCC administrators for the purpose of solving machine problems (e.g., checking for policy violations or determining why a machine is bogged down) or for machine security.
Storage and Disk Space Use
By default every user account on an HCC machine belongs to a research group. The leader of this group is often a faculty member directing graduate research, but other arrangements are common. Each group is given access to 2 filesystems, /home and /work. HCC files for a given user ultimately belong to the research group, although the privacy of each user by default is protected with standard file permissions. No personal identifying data or other data of a private nature should be stored on HCC machines.
Quotas are enforced per research group on /home. /home is only to be used for files that are necessary for functioning on HCC machines, such as code, difficult-to-recreate input files, and so on. A best-effort backup is maintained for /home, but this is not an archiving service; the backup is only a protection against catastrophic loss. However, if a user accidentally deletes a file, HCC may in the best case be able to recover it, and will do so when possible. While exact amounts vary slightly per machine, a quota of 100 GB is common and currently enforced on HCC clusters. Users may run the "diskusage" command to check their current storage status.
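The "diskusage" command above is HCC's own helper; a similar (though quota-unaware) view is available with standard tools, sketched here:

```shell
# Summarize how much space your home directory currently uses.
# "du -sh" prints one human-readable total for the given path.
du -sh "$HOME"
```

Comparing this total against the group quota (commonly 100 GB) shows how much headroom remains.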
/work should be used for running jobs, temporary storage of large output files, and so on. /work is not subject to a quota. However, if /work fills up, HCC staff will delete existing files as needed to maintain system functionality. Users should never store precious data on /work. Files will be deleted according to age (oldest first) and per-group usage (largest first). All users are asked to remove files from /work as soon as possible; old files will be removed on a regular basis. Whenever practical, users will be notified before files are removed.
Researchers who have storage needs beyond the default allotment may purchase storage to be added to HCC machines. For details concerning this, please contact David Swanson.
- Users are not allowed to gain access to worker nodes directly via ssh/rsh and work from there, unless special permission has been granted by HCC.
- All jobs that are to be run on the clusters must be submitted via the job schedulers.
- You may not run your programs in the background, or run large CPU- or memory-intensive programs, on the login/head nodes. Jobs found running there will be terminated and your account will be locked.
- Users must abide by the Computer Use Policies at UNL.
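As a concrete illustration of submitting through a scheduler, here is a minimal batch script sketch, assuming a SLURM-based scheduler; the resource values and job name are illustrative only, and the options your cluster accepts may differ, so check the HCC Documentation.

```shell
#!/bin/sh
# submit.sh -- minimal example SLURM batch script (values are illustrative).
#SBATCH --time=01:00:00        # wall-clock limit (HH:MM:SS)
#SBATCH --mem-per-cpu=1024     # memory per core, in MB
#SBATCH --job-name=example
#SBATCH --error=job.%J.err     # scheduler writes stderr here
#SBATCH --output=job.%J.out    # scheduler writes stdout here

# The actual work goes below; this placeholder just reports the worker node.
echo "Running on $(hostname)"
```

Under this assumption, the script would be submitted with `sbatch submit.sh` and monitored with `squeue -u $USER`, never run directly on the login node.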
For assistance, contact HCC Support.