Using spatial null models
This example demonstrates how to use the spatial null models in neuromaps.nulls to test the correlation between two brain annotations.

The brain, and most features derived from it, is spatially autocorrelated, so when making comparisons between brain features we need to account for this spatial autocorrelation. Enter: spatial null models.
Spatial null models need to be used whenever you're comparing brain maps. In order to demonstrate how to use them in neuromaps we need two annotations to compare. We'll use the first principal component of cognitive terms from NeuroSynth (Yarkoni et al., 2011, Nat Methods) and the first principal component of gene expression across the brain (from the Allen Human Brain Atlas).

Note that we pass return_single=True to neuromaps.datasets.fetch_annotation() so that the returned data are filepaths rather than the default dictionary format. (This only works because we know there is only one annotation matching each query; a dictionary is always returned when multiple annotations match.)
from neuromaps import datasets
nsynth = datasets.fetch_annotation(source='neurosynth', return_single=True)
genepc = datasets.fetch_annotation(desc='genepc1', return_single=True)
print('Neurosynth: ', nsynth)
print('Gene PC1: ', genepc)
Downloading data from https://files.osf.io/v1/resources/4mw3a/providers/osfstorage/60c22953f3ce9401fa24e651 ...
...done. (2 seconds, 0 min)
[References] Please cite the following papers if you are using this data:
For {'source': 'neurosynth', 'desc': 'cogpc1', 'space': 'MNI152', 'res': '2mm'}:
[primary]:
Tal Yarkoni, Russell A Poldrack, Thomas E Nichols, David C Van Essen, and Tor D Wager. Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8):665, 2011.
[secondary]:
Russell A Poldrack, Aniket Kittur, Donald Kalar, Eric Miller, Christian Seppa, Yolanda Gil, D Stott Parker, Fred W Sabb, and Robert M Bilder. The cognitive atlas: toward a knowledge foundation for cognitive neuroscience. Frontiers Neuroinform, 5:17, 2011.
[References] Please cite the following papers if you are using this data:
For {'source': 'abagen', 'desc': 'genepc1', 'space': 'fsaverage', 'den': '10k'}:
[primary]:
Michael J Hawrylycz, Ed S Lein, Angela L Guillozet-Bongaarts, Elaine H Shen, Lydia Ng, Jeremy A Miller, Louie N Van De Lagemaat, Kimberly A Smith, Amanda Ebbert, Zackery L Riley, and others. An anatomically comprehensive atlas of the adult human brain transcriptome. Nature, 489(7416):391, 2012.
Ross D Markello, Aurina Arnatkeviciute, Jean-Baptiste Poline, Ben D Fulcher, Alex Fornito, and Bratislav Misic. Standardizing workflows in imaging transcriptomics with the abagen toolbox. eLife, 10:e72129, 2021.
[secondary]:
Neurosynth: /home/runner/neuromaps-data/annotations/neurosynth/cogpc1/MNI152/source-neurosynth_desc-cogpc1_space-MNI152_res-2mm_feature.nii.gz
Gene PC1: ['/home/runner/neuromaps-data/annotations/abagen/genepc1/fsaverage/source-abagen_desc-genepc1_space-fsaverage_den-10k_hemi-L_feature.func.gii', '/home/runner/neuromaps-data/annotations/abagen/genepc1/fsaverage/source-abagen_desc-genepc1_space-fsaverage_den-10k_hemi-R_feature.func.gii']
These annotations are in different spaces, so we first need to resample them to a common space. Here, we'll resample both to the fsaverage surface at 10k resolution (approximately 10k vertices per hemisphere). Note that genepc1 is already in this space, so no resampling will be performed for those data. (We could alternatively specify 'transform_to_trg' for the resampling parameter and achieve the same outcome.)

The data returned will always be pre-loaded nibabel image instances:
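A sketch of this step, assuming the neuromaps.resampling.resample_images() function with the resampling parameter described above ('transform_to_alt' plus an alt_spec targeting the fsaverage 10k space):

from neuromaps import resampling

# Resample the MNI152 NeuroSynth map and the fsaverage gene map to a
# common space: the fsaverage surface at 10k density
nsynth, genepc = resampling.resample_images(src=nsynth, trg=genepc,
                                            src_space='MNI152',
                                            trg_space='fsaverage',
                                            resampling='transform_to_alt',
                                            alt_spec=('fsaverage', '10k'))
print(nsynth, genepc)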
Downloading data from https://files.osf.io/v1/resources/4mw3a/providers/osfstorage/60b684c03a6df1020ed525f6 ...
...done. (2 seconds, 0 min)
Extracting data from /home/runner/neuromaps-data/5bb728f5dc8506afb8459c2c4450bd86/regfusion.tar.gz..... done.
(<nibabel.gifti.gifti.GiftiImage object at 0x7f58bf22d5e0>, <nibabel.gifti.gifti.GiftiImage object at 0x7f58bf22dd30>) (<nibabel.gifti.gifti.GiftiImage object at 0x7f58bf22dfa0>, <nibabel.gifti.gifti.GiftiImage object at 0x7f58bf22dc10>)
Once the images are resampled we can easily correlate them:
from neuromaps import stats
corr = stats.compare_images(nsynth, genepc)
print(f'Correlation: r = {corr:.02f}')
Correlation: r = 0.34
What if we want to assess the statistical significance of this correlation? In this case, we can use a null model from the neuromaps.nulls module.

Here, we'll employ the null model proposed in Alexander-Bloch et al., 2018, NeuroImage. We provide one of the maps we're comparing, the space and density of the map, and the number of permutations we want to generate. The returned array will have two dimensions, where each row corresponds to a vertex and each column to a unique permutation.

(Note that we need to pass the loaded data from the provided map to the null function, so we use the neuromaps.images.load_data() utility.)
from neuromaps import images, nulls
nsynth_data = images.load_data(nsynth)
rotated = nulls.alexander_bloch(nsynth_data, atlas='fsaverage', density='10k',
                                n_perm=100, seed=1234)
print(rotated.shape)
Downloading data from https://files.osf.io/v1/resources/4mw3a/providers/osfstorage/60b684ab9096b7021b63cf6b ...
...done. (1 seconds, 0 min)
Extracting data from /home/runner/neuromaps-data/e38b96d96273aa064c22296eda1e5688/fsaverage10k.tar.gz..... done.
(20484, 100)
We can supply the generated null array to the neuromaps.stats.compare_images() function and it will be used to generate a non-parametric p-value. The function assumes that the array provided to the nulls parameter corresponds to the first dataset passed to the function (i.e., nsynth).

Note that the correlation remains identical to that above, but the p-value is now returned as well:
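A minimal sketch of that call, assuming compare_images() returns both the correlation and the p-value when a nulls array is supplied:

# Recompute the correlation, now with a permutation-based p-value
corr, pval = stats.compare_images(nsynth, genepc, nulls=rotated)
print(f'Correlation: r = {corr:.02f}, p = {pval:.04f}')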
Correlation: r = 0.34, p = 0.1782
There are a number of different null functions that can be used to generate null maps; they have (nearly) identical function signatures, so refer to the API reference for more information.
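For instance, here is a hedged sketch of swapping in a different vertex-level null (nulls.burt2020, which to our understanding accepts the same core arguments; check the API reference for exact parameters before use):

# Hypothetical alternative: nulls from the Burt et al., 2020 generative model,
# using the same call pattern as nulls.alexander_bloch() above
rotated_alt = nulls.burt2020(nsynth_data, atlas='fsaverage', density='10k',
                             n_perm=100, seed=1234)
print(rotated_alt.shape)  # expected shape: (20484, 100)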