Ken Lett, CQLS
After more than a year of planning and hard work, CQLS is completing the upgrade of its entire high-performance computing (HPC) infrastructure, creating a consolidated, campus-wide HPC environment that provides accessible research computing across OSU.
The old GENOME cluster is being replaced by an improved infrastructure with new features and across-the-board updates, including new operating systems (Rocky Linux 9 or Ubuntu 22.04), updates to many software packages, and improved resource management.
The new cluster, Wildwood, uses SLURM as its primary job queueing system, which provides priority queueing and GPU-aware resource management. Wildwood also offers SGE job management to support those who rely on the SGE workflow for their analyses. To tie the two systems together, we have developed a suite of new tools (hpcman) that lets users submit jobs and manage their work on either SLURM or SGE with a unified command set.
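As a simple illustration, a minimal SLURM batch script looks like the following (the job name, resource requests, and analysis command are placeholders, not Wildwood-specific defaults):

    #!/bin/bash
    #SBATCH --job-name=example_job       # name shown in the queue
    #SBATCH --cpus-per-task=4            # request 4 CPU cores
    #SBATCH --mem=8G                     # request 8 GB of memory
    #SBATCH --time=02:00:00              # 2-hour wall-clock limit
    #SBATCH --output=example_job.%j.log  # %j expands to the job ID

    # Placeholder for the actual analysis command
    echo "Running on $SLURM_CPUS_PER_TASK cores"

The script is submitted with sbatch and monitored with squeue -u $USER; SGE users run the equivalent job through qsub, and hpcman wraps both schedulers behind a single command set.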
Another big improvement in Wildwood is ONID authentication: users no longer have to maintain a separate CQLS password and can log in with their ONID credentials.
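In practice, connecting is a standard SSH login with your ONID username (the hostname below is hypothetical; see the documentation webpage for the actual login nodes):

    # Hostname is illustrative only; check the CQLS documentation
    ssh onid-username@wildwood.cqls.oregonstate.edu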
Wildwood is also a federated cluster: it connects to COEAS computing resources and will soon connect to the COE HPC as well. A single login lets users run jobs and manage their data on any of these clusters, and shared storage provides a unified interface to research data.
The CQLS Wildwood HPC offers 9 PB of data storage, roughly 6,500 CPUs, and more than 80 GPUs.
Labs and departments have been transitioning to Wildwood over the summer, but Wildwood also has an expanded ‘all.q’: general-access resources that anyone can use, including GPUs and a PowerPC-architecture machine.
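Submitting to the general-access queue uses standard scheduler commands; for example, under SGE (the script name is a placeholder):

    # Submit a job script to the general-access queue from the current directory
    qsub -q all.q -cwd myjob.sh

    # Check its status
    qstat -u $USER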
For more information about the new cluster and how to connect, see the documentation webpage, the account request form, or contact CQLS HPC support at cqls-support@cqls.oregonstate.edu.