
Parallel Post-Processing on Amarel

Warning: The following contains references to Unix command-line tools, MATLAB processing of fMRI data, and the Star Wars Prequel Movies. Viewer discretion is advised.
Also, this guide assumes you already have or know how to write MATLAB post-processing scripts, either with SPM or some other similar toolbox. What we’re covering is simply parallelizing that script to run on Amarel.


RUNNING SUBJECT: 1 of 24 ....... TIME REMAINING: 1:03:54

A horror story that John Carpenter and Wes Craven would be thrilled to write, but it’s our reality. Running MATLAB scripts on local machines can be fairly quick, and certainly has a low coefficient of static friction, but sometimes it’s just not up to snuff. And that’s not your fault as an analyst, it’s not SPM’s fault or MATLAB’s fault, or your computer’s fault. Sometimes our GLMs or analyses are just complicated. But what if we took a page out of Jedi Master Sifo Dyas’s book and got some clones to do our work for us? But instead of battling Count Dooku’s Confederacy of Independent Systems, we’re battling long run times. And instead of Kaminoan cloning scientists, we have the Amarel High Performance Computing Cluster. Okay, this metaphor is getting convoluted, let’s start over.

We often have a lot of subjects that need processing, and these processes are often (and ideally) identical, so we put them in a trusty old FOR loop. Five to ten minutes per subject doesn’t seem too bad, but when you have 60 subjects in a row, that’s an entire work day with a bogged-down computer. “So run it overnight,” you say to yourself. But what happens if there’s a power outage or an error during the 15th subject? All of those potential processing hours are as black and void as your soul feels when you see the red ERROR text in the MATLAB terminal. The best option is to run all of the subjects at the same time, which is where the cluster comes into play. Parallel processing can be done on local machines, but it’s limited and not exactly recommended. Amarel – named after Dr. Saul Amarel, one of the founders of the Computer Science department at Rutgers – is a high-performance computing cluster that can not only run scripts faster than your laptop but also run dozens of them simultaneously, using a job scheduling system known as SLURM. No, not the highly addictive, delicious beverage from Futurama; the Simple Linux Utility for Resource Management.

The basic workflow is as follows:
1. Write your MATLAB script
1-1/2. Make sure it works
2. Write the SLURM configuration script
3. Submit to the cluster

Nice and easy. Thanks for reading, remember to tip your wait staff.

Okay, so there’s more to it than that, but that’s the general outline. We’ll get into all of the intermediate steps as we go – like loading your project onto Amarel in the first place. For this post, I’m going to assume that your data has already been pre-processed, either on Flywheel, XNAT, HCP, Amarel, or your friend’s brother’s shoddy bitcoin-mining rig. Look Jared, it’s just not profitable anymore, let it go. I can get into pre-processing on Amarel some other time.


[Helpful Link: Amarel Users’ Guide]
Before I begin: most of what I’m going to describe below can also be read at the above link. The guide may not be as succinct, but it’s probably clearer.

The real first step here is making sure you have access to the Amarel cluster. If you do not have an account, you can request one from OARC (the Office of Advanced Research Computing); they will usually get back to you in a day or two. Once you have an account, you can access a login node via SSH in a terminal window (such as PuTTY, XTerm, WSL, or just a plain terminal) or through your browser via the OnDemand portal [LINK]. From the browser, you have the option of using the command line at “CLUSTERS->Amarel Cluster Shell Access” or creating a remote desktop session with “Interactive Apps”. NOTE: you must be connected to the Rutgers network, either on campus or via VPN.
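If you go the terminal route, the connection looks something like the following. (The hostname here is the one the OARC users’ guide documents; `netid` is a placeholder for your own NetID – double-check the guide if it doesn’t connect.)

```shell
# SSH to an Amarel login node. Requires the Rutgers network or VPN.
# Replace "netid" with your own Rutgers NetID.
ssh netid@amarel.rutgers.edu
```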


Next, you need to get your data onto the cluster. There are three methods of doing this – SFTP, rclone, and direct upload. Which one to use has a lot to do with how your data is stored now.

If your study is on Flywheel, you could open up a remote desktop session and download it directly into a study folder. Alternatively, you could have it on your local computer and use SFTP. Direct upload through the browser is technically an option; however, it would take so long for most studies that I would not recommend it. Direct uploads are better suited for individual files like scripts or ROI masks.

If your study is on Box, first of all, I’m so sorry. Second, there is a very detailed description in the users’ guide about how to accomplish this. (Just CTRL+F the word “rclone”). One thing that I will add as a bit of advice is that you should append the following flags to any and all rclone commands: -P --transfers=16 --checkers=32. The -P reports progress, and the other two speed up the process a little bit.
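Put together, a pull from Box might look like the sketch below. Everything here is a placeholder: “box:” must match a remote you have already set up with `rclone config`, and the paths are examples.

```shell
# Copy a study from a configured Box remote into scratch, with the
# recommended flags: -P shows progress, --transfers/--checkers add parallelism.
rclone copy box:my_study /scratch/netid/my_study -P --transfers=16 --checkers=32
```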

If you do not have another long-term storage solution on the cluster, the best place to load your study would be in the scratch area. Note that each user has a limit of 1TB of storage in their scratch folder (/scratch/netid/), so larger studies would need different accommodations.
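If you want a quick sense of how close a study is to that limit, plain old `du` does the trick. In this sketch, STUDY_DIR defaults to the current directory so you can try it anywhere; on Amarel you would point it at your own scratch folder.

```shell
# Report the total size of a study folder in human-readable units.
# On Amarel, set STUDY_DIR to something like /scratch/netid/my_study.
STUDY_DIR="${STUDY_DIR:-.}"
du -sh "$STUDY_DIR"
```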


This post is not going to get into the actual writing of post-processing scripts; I’m going to assume you either have one or know how to make one. What I will describe, however, is how to open a MATLAB session on Amarel. This needs to be done through the browser portal [LINK] – either inside a remote desktop session, or by opening a MATLAB window directly from that link. If you open a desktop session, you must first open the terminal, type module load MATLAB, and hit ENTER. This loads the most recent version of MATLAB that’s available, which as of this writing is R2023a. If you need or want a specific version, you would just need to specify it in your module load command, e.g. module load MATLAB/R2021a. After the module is loaded, you can just run matlab and it will open up normally.
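In a desktop-session terminal, that setup looks like the fragment below (this assumes Amarel’s module system; the version string is just an example):

```shell
# Environment setup in a remote desktop terminal on Amarel.
module avail MATLAB        # list which MATLAB versions are installed
module load MATLAB/R2021a  # load a specific version (omit the version for the newest)
matlab                     # launch MATLAB normally
```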

When writing scripts, my personal recommendation is to write it with the main FOR loop fully intact, and afterwards change it to run a single subject that’s set by the SLURM scheduler. What does this mean? I’ll give a short example.

Initially write your script like this:

%% test_script.m
addpath('/projects/f_dz268_1/matlab/spm12') % if you don't have access, you can download SPM to your home directory
subject_list = dir('/scratch/netid/my_study/derivatives/fmriprep/sub-*');
for si = 1:length(subject_list)
    % ...per-subject post-processing goes here...
end

When you’re sure that everything within the FOR loop is correct for one subject and won’t error, change it to something like this:

for si = sub % "sub" will be set by the SLURM scheduler
    % ...same per-subject code as before...
end


But Wil, where does the sub variable come from? Well, that will be set in your SLURM script. With your favorite text-editing program, open up a file called something like run_my_script.slurm (.sh will work too) and write something like this:

#!/bin/bash
#SBATCH --partition=main
#SBATCH --requeue
#SBATCH --job-name=my_script
#SBATCH --time=1:00:00 # one hour
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=16G # request a minimum of 16G for MATLAB
#SBATCH --output=log/slurm.%A_%a.out # make sure this output directory exists or it will fail
#SBATCH --error=log/slurm.%A_%a.err

cd /path/to/your/script/

# Pass this task's array index into MATLAB as the "sub" variable, then run the script
matlab_command="sub=${SLURM_ARRAY_TASK_ID}; test_script"

module load MATLAB
matlab -nodisplay -batch "$matlab_command"

And right there is our good friend, the sub variable. Once you save this file, you can run it from the Amarel terminal with sbatch --array=1-N run_my_script.slurm (replacing N with your number of subjects) and check progress with sacct -X --format="JobID,Elapsed,NCPUS,ReqCPUS,ReqMem,ReqNodes,State,ExitCode". Plain sacct works too, but setting the format makes the output cleaner, in my opinion (I have this saved as an alias so I don’t have to type it out every time).
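To see how the pieces connect, here’s a sketch you can run in any shell: inside a real array job, SLURM exports SLURM_ARRAY_TASK_ID to each task automatically; outside one it’s unset, so we simulate task 5 by hand.

```shell
# Simulate what SLURM does for one array task: each task gets its own
# SLURM_ARRAY_TASK_ID (here we pretend we're task 5 of the array).
SLURM_ARRAY_TASK_ID=5

# Build the same string the .slurm script hands to matlab -batch:
# assign the "sub" variable, then run the script by name.
matlab_command="sub=${SLURM_ARRAY_TASK_ID}; test_script"
echo "$matlab_command"   # prints: sub=5; test_script
```

So task 5 of the array runs only subject 5’s iteration of the loop, task 6 runs subject 6’s, and so on – all at the same time.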

And that’s pretty much it! Are there better ways to do this? Probably. But this works for me, your mileage may vary. Hopefully this has been helpful and you’ll be able to understand how I work, and maybe use this to make your own even-better way to run scripts on Amarel. If you do, I hope you’ll share with the class and cut me into a percentage of your profits.

– Wil Rohlhill