Shared response model

Authors: Javier Turek (javier.turek@intel.com), Samuel A. Nastase (sam.nastase@gmail.com), Hugo Richard (hugo.richard@ens-lyon.fr)

This notebook provides interactive examples of functional alignment using the shared response model (SRM; Chen et al., 2015). BrainIAK includes several variations on the SRM algorithm, but here we focus on the core probabilistic SRM implementation. The goal of the SRM is to capture shared responses across participants performing the same task in a way that accommodates individual variability in response topographies (Haxby et al., 2020). Given data that are temporally synchronized across a group of subjects, SRM computes a low-dimensional shared feature space common to all subjects. The method also constructs orthogonal weights to map between the shared space and each subject's idiosyncratic voxel space. This notebook accompanies the manuscript "BrainIAK: The Brain Imaging Analysis Kit" by Kumar and colleagues (2020).

The functional alignment (funcalign) module includes the following variations of SRM:

  - SRM (brainiak.funcalign.srm.SRM): the probabilistic SRM used in this notebook
  - DetSRM (brainiak.funcalign.srm.DetSRM): a deterministic variant of the SRM
  - RSRM (brainiak.funcalign.rsrm.RSRM): robust SRM, which separates out subject-specific idiosyncratic components
  - SSSRM (brainiak.funcalign.sssrm.SSSRM): semi-supervised SRM, which incorporates label information
  - FastSRM (brainiak.funcalign.fastsrm.FastSRM): a fast, low-memory implementation for large datasets

Annotated bibliography

  1. Chen, P. H. C., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., & Ramadge, P. J. (2015). A reduced-dimension fMRI shared response model. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, R. Garnett (Eds.), Advances in Neural Information Processing Systems, vol. 28 (pp. 460-468). link Introduces the SRM method of functional alignment with several performance benchmarks.

  2. Haxby, J. V., Guntupalli, J. S., Nastase, S. A., & Feilong, M. (2020). Hyperalignment: modeling shared information encoded in idiosyncratic cortical topographies. eLife, 9, e56601. link Recent review of hyperalignment and related functional alignment methods.

  3. Chen, J., Leong, Y. C., Honey, C. J., Yong, C. H., Norman, K. A., & Hasson, U. (2017). Shared memories reveal shared structure in neural activity across individuals. Nature Neuroscience, 20(1), 115-125. link SRM is used to discover the dimensionality of shared representations across subjects.

  4. Nastase, S. A., Liu, Y. F., Hillman, H., Norman, K. A., & Hasson, U. (2020). Leveraging shared connectivity to aggregate heterogeneous datasets into a common response space. NeuroImage, 217, 116865. link This paper demonstrates that applying SRM to functional connectivity data can yield a shared response space across disjoint datasets with different subjects and stimuli.

Table of contents

Example fMRI data and atlas

To work through the SRM functionality, we use an fMRI dataset collected while participants listened to a spoken story called "I Knew You Were Black" by Carol Daniel. These data are available as part of the publicly available Narratives collection (Nastase et al., 2019). Here, we download a pre-packaged subset of the data from Zenodo. These data have been preprocessed using fMRIPrep with confound regression in AFNI. We apply the SRM to a region of interest (ROI) comprising the "temporal parietal" network according to a cortical parcellation containing 400 parcels from Schaefer and colleagues (2018).
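The ROI extraction step amounts to selecting the voxels whose atlas label falls in the "temporal parietal" network. The sketch below uses synthetic stand-ins for the downloaded BOLD data and the Schaefer atlas (the arrays, parcel labels, and sizes here are illustrative assumptions, not the real dataset):

```python
import numpy as np

# Hypothetical stand-ins: in the notebook, `bold` comes from the Zenodo
# archive and `atlas` from the Schaefer 400-parcel parcellation resampled
# to the same space as the functional data.
rng = np.random.default_rng(0)
n_voxels, n_trs = 1000, 300
bold = rng.standard_normal((n_voxels, n_trs))   # voxels x TRs
atlas = rng.integers(1, 401, size=n_voxels)     # parcel label per voxel

# Parcel labels for the "temporal parietal" network (assumed here; the
# real labels are read from the atlas lookup table)
roi_parcels = [195, 196, 197]
roi_mask = np.isin(atlas, roi_parcels)
roi_data = bold[roi_mask]                       # ROI voxels x TRs
print(roi_data.shape)
```

The result is a voxels-by-TRs matrix per subject, which is the input format the SRM expects.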

Once the data are loaded, we split them into two halves for two-fold validation. We will use one half for training the SRM and the other for testing its performance. We then normalize (z-score) the data within each half.
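The split-and-normalize step can be sketched as follows, using synthetic stand-ins for the per-subject ROI matrices (the shapes and subject count are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# One array per subject: voxels x TRs (stand-ins for the ROI data)
data = [rng.standard_normal((200, 300)) for _ in range(5)]

# Split each subject's time series into two halves for two-fold validation
n_trs = data[0].shape[1]
train = [d[:, :n_trs // 2] for d in data]
test = [d[:, n_trs // 2:] for d in data]

# Z-score each voxel's time series within each half
train = [stats.zscore(d, axis=1, ddof=1) for d in train]
test = [stats.zscore(d, axis=1, ddof=1) for d in test]
print(train[0].shape, test[0].shape)
```

Z-scoring within each half (rather than across the full run) avoids leaking information between the training and test folds.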

Estimating the SRM

Next, we train the SRM on the training data. We need to specify the desired dimensionality of the shared feature space. Although we simply use 50 features here, the optimal number of dimensions can be found using grid search with cross-validation. We also need to specify the number of iterations to ensure the SRM algorithm converges.
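In BrainIAK this is done with `brainiak.funcalign.srm.SRM(n_iter=10, features=50)` followed by `.fit(train_data)`. To make the underlying alternation concrete, here is a minimal numpy sketch of the simpler deterministic SRM objective (the probabilistic SRM fitted in this notebook additionally models noise and variance terms); all names and sizes below are illustrative:

```python
import numpy as np

def fit_srm(data, n_features=50, n_iter=10, seed=0):
    """Minimal sketch of deterministic SRM: alternately solve an
    orthogonal Procrustes problem for each subject's weights W_i, then
    average the back-projected data to update the shared response S."""
    rng = np.random.default_rng(seed)
    n_trs = data[0].shape[1]
    s = rng.standard_normal((n_features, n_trs))  # initial shared response
    for _ in range(n_iter):
        ws = []
        for x in data:
            # Procrustes solution: W_i = U V^T from the SVD of X_i S^T
            u, _, vt = np.linalg.svd(x @ s.T, full_matrices=False)
            ws.append(u @ vt)                     # columns are orthonormal
        # Update S as the mean of the subjects' projected data
        s = np.mean([w.T @ x for w, x in zip(ws, data)], axis=0)
    return ws, s

rng = np.random.default_rng(1)
data = [rng.standard_normal((200, 150)) for _ in range(5)]  # voxels x TRs
ws, s = fit_srm(data, n_features=50, n_iter=10)
print(ws[0].shape, s.shape)   # (200, 50) (50, 150)
```

Each `W_i` maps the 50-dimensional shared space into that subject's 200-voxel space, and `S` holds the shared feature time series.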

After training the SRM, we obtain a shared response $S$ that contains the values of the features at each TR, and a set of weight matrices $W_i$ that project from the shared subspace to each subject's idiosyncratic voxel space. Let us check the orthogonality property of the weight matrix $W_i$ for a single subject. We visualize $W_i^TW_i$, which should be the identity matrix $I$ with shape equal to the number of features we selected.
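This check holds for any matrix with orthonormal columns; a self-contained illustration (using a random orthonormal matrix from a QR decomposition as a stand-in for a fitted $W_i$):

```python
import numpy as np

# Stand-in for a fitted weight matrix W_i: 200 voxels x 50 features,
# with orthonormal columns obtained via QR decomposition
rng = np.random.default_rng(0)
w, _ = np.linalg.qr(rng.standard_normal((200, 50)))

identity_check = w.T @ w   # features x features; should equal I
print(np.allclose(identity_check, np.eye(50), atol=1e-10))  # True
```

In the notebook, plotting this matrix (e.g. with `plt.matshow`) should show ones on the diagonal and zeros elsewhere.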

Between-subject time-segment classification

When we trained the SRM above, we learned the weight matrices $W_i$ and the shared response $S$ for the training data. The weight matrices further allow us to convert new data into the shared feature space. We call the transform() function to project each subject's test data into the shared space.
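Because each $W_i$ has orthonormal columns, transform() amounts to the projection $S_i = W_i^T X_i$. A sketch with stand-in matrices (shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_trs = 200, 50, 150
# Stand-in for a learned weight matrix W_i (orthonormal columns)
w, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_features)))
x_test = rng.standard_normal((n_voxels, n_trs))   # new data for subject i

shared_test = w.T @ x_test   # features x TRs, now in the shared space
print(shared_test.shape)     # (50, 150)
```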

We evaluate the performance of the SRM using a between-subject time-segment classification (or "time-segment matching") analysis with leave-one-subject-out cross-validation (e.g., Haxby et al., 2011; Chen et al., 2015). The function receives the data from N subjects with a specified window size win_size for the time segments. A segment is the concatenation of win_size TRs. Then, using the averaged data from the remaining N-1 subjects, it attempts to match each segment from the left-out subject to its correct temporal position. The function returns the average accuracy across segments for each subject.
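A simplified numpy sketch of this analysis (not BrainIAK's implementation; in particular, it does not exclude segments overlapping the test segment from the competitor set, and the data here are synthetic):

```python
import numpy as np

def time_segment_matching(data, win_size=10):
    """Leave-one-subject-out time-segment matching: each left-out segment
    (win_size concatenated TRs) is assigned to the position whose
    group-average segment it correlates with most. Returns per-subject
    accuracies."""
    n_subj, n_trs = len(data), data[0].shape[1]
    n_seg = n_trs - win_size + 1
    # One flattened (features * win_size) vector per segment, per subject
    segs = [np.stack([d[:, i:i + win_size].ravel() for i in range(n_seg)])
            for d in data]
    accuracies = []
    for test_subj in range(n_subj):
        # Average the other subjects' segment matrices
        avg = np.mean([segs[s] for s in range(n_subj) if s != test_subj],
                      axis=0)
        # Row-wise correlation between test segments and average segments
        a = segs[test_subj] - segs[test_subj].mean(axis=1, keepdims=True)
        b = avg - avg.mean(axis=1, keepdims=True)
        corr = (a / np.linalg.norm(a, axis=1, keepdims=True)) @ \
               (b / np.linalg.norm(b, axis=1, keepdims=True)).T
        # A segment is correct if its best match is its own position
        accuracies.append(np.mean(corr.argmax(axis=1) == np.arange(n_seg)))
    return np.array(accuracies)

# Synthetic subjects sharing a common signal plus noise; accuracy should
# land well above the 1 / n_segments chance level
rng = np.random.default_rng(0)
shared = rng.standard_normal((50, 120))
data = [shared + 0.5 * rng.standard_normal(shared.shape) for _ in range(4)]
print(time_segment_matching(data, win_size=10).mean())
```

Chance accuracy here is 1/111 (one correct position out of n_seg candidates), so values near 1.0 on this synthetic data reflect the strong shared signal.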

Let's compute time segment matching accuracy for the anatomically-aligned data:

Now, we compute it after transforming the subjects' data with the SRM:

Lastly, we plot the classification accuracies to compare methods.

Summary

The SRM allows us to find a reduced-dimension shared response space that resolves functional–topographical idiosyncrasies across subjects. We can use the resulting transformation matrices to project test data from any given subject into the shared space. The plot above shows the time-segment matching accuracy for the training data, the test data without any transformation, and the test data when SRM is applied. The average performance without SRM is 11%, whereas with SRM it is boosted to 40%. Projecting data into the shared space dramatically improves between-subject classification.

References