4.6 Best practices
Here we outline our best practices for using shared computational resources. These are meant to be living guidelines that will be adapted by our team as needed:
- Sharing is caring! Common courtesy can go a long way. As much as possible, try to use only the resources you need.
- Leverage the tools below to monitor how many cores and how much RAM you and others are currently using (see the monitoring sketch after this list).
- In general on sequoia, feel free to run analyses that use up to 24 cores and 256 GB of RAM (see the core-capping sketch below). We will likely adjust these specific numbers once we start using sequoia and get a better sense of how many resources we actually use. And if you don’t need that much, please request less so that others can use the resources they need.
- For larger analyses that require many cores or lots of RAM, coordinate with others over the server Slack channel (#hpc-core-dination) to ensure that workflows are not disrupted and that everyone has reasonable access to computational resources.
- Generally, we recommend piloting your code on a small subset of your data and/or a single core, either on your local computer or on one of our HPC servers (see the pilot-run sketch below). Once you know it works and have a sense of how much memory it will use and how long it will take to execute, you can run the full analysis on the server. And if it looks like the full analysis will require resources beyond the recommended 24 cores and 256 GB, coordinate with the team on the Slack channel #hpc-core-dination.
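For checking current usage before launching a job, here is a minimal sketch in Python, assuming the psutil package is installed (`pip install psutil`). The `server_has_headroom` helper and its threshold values are illustrative assumptions, not part of our official tooling:

```python
import psutil

def server_has_headroom(cores_needed=24, ram_needed_gb=256):
    """Return True if the server can likely absorb the requested resources."""
    # cpu_percent(interval=1) samples total CPU utilization over one second
    idle_fraction = 1 - psutil.cpu_percent(interval=1) / 100
    idle_cores = psutil.cpu_count() * idle_fraction
    free_ram_gb = psutil.virtual_memory().available / 1e9
    return idle_cores >= cores_needed and free_ram_gb >= ram_needed_gb

if __name__ == "__main__":
    if server_has_headroom(cores_needed=8, ram_needed_gb=64):
        print("Looks clear -- go ahead.")
    else:
        print("Server is busy -- check #hpc-core-dination before launching.")
```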
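To stay within the core guideline, cap the worker count explicitly rather than letting a library grab every core on the machine. A minimal sketch using Python's standard multiprocessing module (the `analyze` function and its inputs are placeholders for your own workload):

```python
from multiprocessing import Pool

MAX_CORES = 24  # current sequoia guideline; request fewer if the job allows

def analyze(sample):
    """Placeholder for your per-sample analysis."""
    return sample ** 2

if __name__ == "__main__":
    samples = range(1000)
    # Only take what the job actually needs, never more than the guideline
    n_workers = min(4, MAX_CORES)
    with Pool(processes=n_workers) as pool:
        results = pool.map(analyze, samples)
```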
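The pilot-then-scale workflow might look like the following sketch, which times a small subset on a single core and naively extrapolates to the full run (`analyze`, the data, and the 1% subset size are placeholder assumptions; real memory use rarely scales perfectly linearly, so treat the estimate as a rough guide):

```python
import time
import tracemalloc

def analyze(records):
    """Placeholder for your real analysis."""
    return [r * 2 for r in records]

full_data = list(range(1_000_000))
pilot = full_data[:10_000]  # ~1% subset for the pilot run

# Measure wall time and peak memory for the pilot
tracemalloc.start()
t0 = time.perf_counter()
analyze(pilot)
elapsed = time.perf_counter() - t0
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Extrapolate linearly to the full dataset
scale = len(full_data) / len(pilot)
print(f"Pilot: {elapsed:.2f} s, peak {peak_bytes / 1e6:.1f} MB")
print(f"Naive full-run estimate: {elapsed * scale:.0f} s, "
      f"{peak_bytes * scale / 1e6:.0f} MB")
```

If the extrapolated numbers exceed the 24-core / 256 GB guideline, that is the point to post in #hpc-core-dination before launching.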