4.9 Best practices for sharing our computational resources

Here we outline our best practices for using shared computational resources. These are meant to be living guidelines that will be adapted by our team as needed:

  • In order to keep some of our computational resources easy to use interactively without queues or SLURM, we will need to coordinate and share. Sharing is caring! Common courtesy can go a long way. Hopefully we can largely self-manage this. If not, we will need to move computational resources onto queues and SLURM, which could introduce barriers to analysis and rapid prototyping.

  • Always run htop in the terminal to see how many cores and how much RAM are currently being used by others (a programmatic resource check is sketched after this list).

  • In general on sequoia, feel free to run analyses that use up to 20 cores and 150GB of RAM (see the worker-cap sketch after this list). We will likely adjust these specific numbers adaptively once we start using sequoia and get a better sense of how heavily these resources are used.

  • For larger analyses that require more cores or RAM, coordinate with others over the server Slack channel (#hpc-core-dination) to ensure that workflows are not disrupted and that everyone has reasonable access to computational resources.

  • Generally, we recommend piloting your code on a small subset of your data and/or a single core, either on your local computer or on one of our HPC servers (a profiling sketch follows this list). Once you know it works and have a sense of how much memory it will use and how long it will take to execute, you can run the full analysis on the server. If it looks like the full analysis will require resources beyond the standard recommended 20 cores and 150GB of RAM, coordinate with the team on #hpc-core-dination.
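
As a complement to running htop, the snippet below is a minimal sketch of how current core and memory usage could be checked programmatically before launching a job. It assumes the third-party psutil package is available on the server, which may not be the case everywhere.

```python
import psutil

# Sample per-core utilization over one second (percent busy per core).
per_core = psutil.cpu_percent(interval=1, percpu=True)
busy_cores = sum(1 for pct in per_core if pct > 50)

# System-wide memory snapshot.
mem = psutil.virtual_memory()

print(f"{busy_cores} of {len(per_core)} cores look busy (>50% utilization)")
print(f"RAM: {mem.used / 1e9:.0f} GB used of {mem.total / 1e9:.0f} GB total "
      f"({mem.available / 1e9:.0f} GB available)")
```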
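
To stay within the suggested 20-core budget, parallel code needs an explicit cap on the number of workers it spawns. The sketch below shows one way to do this with Python's standard multiprocessing module; the analysis function and data are placeholders, not part of any real pipeline.

```python
from multiprocessing import Pool

MAX_WORKERS = 20  # suggested per-user core budget on sequoia

def analyze_chunk(chunk):
    # Placeholder for a real per-chunk analysis step.
    return sum(chunk)

if __name__ == "__main__":
    chunks = [list(range(i, i + 1000)) for i in range(0, 100_000, 1000)]
    # Never request more workers than the agreed budget, even if more cores are idle.
    with Pool(processes=MAX_WORKERS) as pool:
        results = pool.map(analyze_chunk, chunks)
    print(f"Processed {len(results)} chunks with at most {MAX_WORKERS} workers")
```

Note that some numerical libraries (e.g., BLAS-backed NumPy operations) spawn their own threads on top of this, so environment variables such as OMP_NUM_THREADS may also need to be set depending on what your analysis uses.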
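
For the piloting step, the wall-clock time and peak memory of a small test run can be measured with the standard library alone, then extrapolated to the full dataset to decide whether the job fits under 20 cores and 150GB. The sketch below assumes a Linux server (where ru_maxrss is reported in kilobytes) and uses a placeholder pilot function.

```python
import resource
import time

def run_pilot(data):
    # Placeholder: run the real analysis on a small subset of the data.
    return [x ** 2 for x in data]

start = time.perf_counter()
result = run_pilot(list(range(1_000_000)))
elapsed = time.perf_counter() - start

# On Linux, ru_maxrss is the peak resident set size of this process in kilobytes.
peak_gb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1e6

print(f"Pilot run: {elapsed:.1f} s elapsed, peak memory ~{peak_gb:.2f} GB")
```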