Connectivity

Traditional multivariate fMRI techniques focus on the information present in patterns of activity in localized regions (ROIs or searchlights). Sometimes, the relevant information may be represented across a network of brain regions and thus would not be identified via ROI analysis or searchlights. Functional connectivity measures help examine information at a global level, in regions that are far apart, focusing on network interaction rather than spatial localization. When performing connectivity analyses, BOLD timeseries are compared across regions (usually with correlation) and the strength of the relationship determines their functional connectivity. By including or excluding stimulus/task variables, we can study the modulation of connectivity by different cognitive states.
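As a toy illustration of this idea (synthetic numbers, not part of the dataset analyzed below), the functional connectivity between two regions is simply the Pearson correlation of their time courses:

In [ ]:
# Toy example: functional connectivity as the correlation of two time courses
import numpy as np

rng = np.random.RandomState(0)
region_a = rng.randn(200)                          # simulated BOLD time course for region A
region_b = 0.7 * region_a + 0.3 * rng.randn(200)   # region B is partially coupled to region A
print('Functional connectivity (r) = %.2f' % np.corrcoef(region_a, region_b)[0, 1])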

We are going to cover connectivity analysis over the next two notebooks, starting in this notebook with more basic ROI-level, task-based and background connectivity. Notebook 09-fcma will cover whole-brain connectivity analyses.

Goal of this script

Learn how to run seed-based connectivity analyses.
Learn to use an atlas to get parcels and define seeds.
Investigate how attention modulates connectivity.

Pre-requisites: An understanding of correlation matrices, and data loading techniques outlined in previous notebooks.

Table of Contents

1. Data loading

1.1 Create stimulus labels and time-shift
1.2 Examine the header file
1.3 Plot the stimulus labels
1.4 Mask and extract the whole-brain data

2. Create a seed

2.1 Create a spherical ROI
2.2 Plot the bold signal for the mask

3. Compute the correlation matrix

3.1 Plot the seed correlations

4. Creating a seed from an atlas

4.1 Compute connectivity across parcels

5. Background connectivity

6. Group analyses

Exercises: 1-9

Novel contribution

In [1]:
import warnings
import sys 
if not sys.warnoptions:
    warnings.simplefilter("ignore")

import numpy as np
import os 
import nibabel as nib
from nilearn.input_data import NiftiMasker, NiftiLabelsMasker
from nilearn import plotting
from nilearn import datasets
from nilearn.connectome import ConnectivityMeasure
from scipy import stats
from scipy.ndimage.measurements import center_of_mass
import matplotlib.pyplot as plt
import seaborn as sns 
import pandas as pd
import brainiak.utils.fmrisim as sim
from brainiak.fcma.util import compute_correlation
from nilearn import input_data
import time
from utils import shift_timing

%autosave 5
%matplotlib inline
sns.set(style = 'white', context='poster', rc={"lines.linewidth": 2.5})
sns.set(palette="colorblind")
Autosaving every 5 seconds

Dataset: The dataset we will be using was collected as part of a recently published study (Hutchinson et al., 2016). A paper reporting the application of connectivity analyses to this dataset is being prepared for journal submission. The README file in latatt_dir describes the dataset in more detail. This dataset has been preprocessed with motion correction and linear detrending. Subjects were asked to attend to a scene on the right or on the left in each block.

Attention and connectivity

Our brain is constantly bombarded by sensory information from the world. Attention refers to the set of cognitive processes that filter this input based on what is salient (e.g., a police siren) and what is relevant (e.g., faces when looking for a friend). Attention biases our behavior and conscious awareness towards these properties of the world and reduces distraction by other properties. At the neural level, attention is controlled by parietal and frontal cortices (Corbetta and Shulman, 2002), which modulate processing in sensory systems, enhancing attended information and suppressing unattended information (Noudoost et al., 2010). Thus, to study the impact of attention on perceptual processing, we need to examine not just localized brain regions but how these regions interact with each other (Saalmann et al., 2007; Gregoriou et al., 2009; Al-Aidroos et al., 2012). In this case, traditional MVPA techniques that examine patterns of activity in local brain regions will not be sufficient and a more global analysis will be helpful. The covarying response of two or more brain regions thus becomes critical, and functional connectivity measures are important for studying these phenomena.

1. Data loading

Load the preprocessed data for subject 1.

In [2]:
from utils import latatt_dir
assert os.path.exists(latatt_dir)

dir_time = os.path.join(latatt_dir, 'onsets/fsl')
dir_motion = os.path.join(latatt_dir, 'processed_data/motionnuisance/')
dir_motion_background = os.path.join(latatt_dir, 'processed_data/background/motionnuisance/')

sub = 'sub01'
num_runs = 1
TR = 1.5
scan_duration = 540

# Shift the data a certain amount
shift_size = 2

Self-Study: Explore the data

Exercise 1: Carefully read the README file in latatt_dir. What are two differences between bg_image and raw_hires?

A:

1.1 Create stimulus labels and time-shift

From the stimulus timing files, we need to create labels for each TR in the BOLD data. We did this exercise for the VDC dataset in notebook 02-data-handling. You can repeat those steps to create labels for every TR.

As we have FSL onset files for this dataset, there is an alternative way to create labels for every TR using fmrisim in BrainIAK. This involves calling generate_stimfunction and then shift_timing, which we defined in utils.

Self-study: What does generate_stimfunction do?

To time-shift or not? In traditional MVPA analyses, we shift the labels relative to the BOLD signal to account for the hemodynamic lag. In functional connectivity analysis, this shift is sometimes omitted to ensure that the voxels being compared are treated the same before, during, and after the stimulus has been shown. If you do not want to time-shift your data, set `shift_size = 0`. Note: In this notebook, we are setting `shift_size = 2`.
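The shift_timing helper itself is defined in utils. As a rough sketch of the idea (this is an assumption about its implementation, shown only to make the logic concrete; it assumes a one-value-per-TR stimulus function and a shift expressed in TRs):

In [ ]:
# Hypothetical sketch of a TR-based shift (the real helper lives in utils)
def shift_timing_sketch(stim_function, shift_size):
    """Shift a one-value-per-TR stimulus time course later in time by `shift_size` TRs."""
    stim = np.asarray(stim_function).flatten()
    # Prepend empty TRs and drop the same number from the end, so each
    # label lines up with the lagged hemodynamic response it evokes
    shifted = np.concatenate([np.zeros(shift_size), stim])
    return shifted[:len(stim)]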
In [3]:
# Use the utilities from the simulator to create an event time course based on an FSL onset file

right_stimfunction = sim.generate_stimfunction(
    onsets='', 
    event_durations='', 
    total_time=scan_duration,
    temporal_resolution=1/TR, 
    timing_file=(dir_time + '/%s/right.txt' % (sub))
)

left_stimfunction  = sim.generate_stimfunction(
    onsets='', 
    event_durations='', 
    total_time=scan_duration,
    temporal_resolution=1/TR, 
    timing_file=(dir_time + '/%s/left.txt' % (sub))
)


# Shift the timecourses to account for the hemodynamic lag
right_stim_lag = shift_timing(right_stimfunction, shift_size)
left_stim_lag = shift_timing(left_stimfunction, shift_size)

1.2 Examine the header file

We have been loading nifti files in all previous notebooks. There is useful information in the header that you should examine to learn about the data organization and shape. You can find the voxel size (mm) and the transform space of the data set. More information can be found here.

In [4]:
# Get the nifti object
nii = nib.load(dir_motion + '%s.nii.gz' % (sub))
hdr = nii.header
print(hdr)
print('Voxel size in %s, Time in %s' % hdr.get_xyzt_units())
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 0
dim             : [  4  45  54  45 360   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [-1.  4.  4.  4.  1.  0.  0.  0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 1470.0978
cal_min         : -1458.5084
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'FSL5.0'
aux_file        : b''
qform_code      : mni
sform_code      : mni
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 90.0
qoffset_y       : -126.0
qoffset_z       : -72.0
srow_x          : [-4.  0.  0. 90.]
srow_y          : [   0.    4.    0. -126.]
srow_z          : [  0.   0.   4. -72.]
intent_name     : b''
magic           : b'n+1'
Voxel size in mm, Time in sec
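The same information can be pulled out programmatically with standard nibabel accessors:

In [ ]:
# Voxel dimensions (mm) and the stored time step, straight from the header
print('Header zooms (x, y, z, t):', hdr.get_zooms())

# The affine maps voxel indices to MNI coordinates for this dataset
print('Affine:\n', nii.affine)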

1.3 Plot the stimulus labels

In this experiment there are two conditions, attend left and attend right; each condition has a separate timing file.

In [5]:
# Plot the stim time course. 
plt.figure(figsize=(14, 4))
plt.plot(right_stim_lag)
plt.plot(left_stim_lag)
plt.yticks([0,1])
plt.xlabel('Timepoints')

plt.legend(('Attend Right', 'Attend Left'), loc='upper right')
plt.ylim(0, 1.5)
sns.despine()

1.4. Mask and extract the whole-brain data

In previous notebooks, we extracted brain data with a mask. The mask was either an ROI or the whole brain. Both of these masks were already created for you. Here we show you a different way, using nilearn, to create a mask from a dataset and then extract the data from the mask. This function can z-score the data as well. More information can be found here. We then plot the time course of a few voxels.

In [6]:
# Init a masker object that also standardizes the data
masker_wb = input_data.NiftiMasker(
    standardize=True,  # Are you going to zscore the data across time?
    t_r=1.5, 
    memory='nilearn_cache',  # Caches the mask in the directory given as a string here so that it is easier to load and retrieve
    memory_level=1,  # Caching aggressiveness (higher levels cache more intermediate results)
    verbose=0
)

# Fit the mask and pull out the time course for every voxel
bold_wb = masker_wb.fit_transform(nii)
bold_wb_r = bold_wb[(right_stim_lag==1),:]
bold_wb_l = bold_wb[(left_stim_lag==1),:]

print('shape - whole brain bold time series: ', np.shape(bold_wb))
print('shape - whole brain bold time series, attend left : ', np.shape(bold_wb_l))
print('shape - whole brain bold time series, attend right : ', np.shape(bold_wb_r))
/usr/people/mk35/.conda/envs/mybrainiak/lib/python3.6/site-packages/nilearn/_utils/cache_mixin.py:129: UserWarning: Incompatible cache in nilearn_cache/joblib: different version of nibabel. Deleting the cache. Put nilearn.CHECK_CACHE_VERSION to false to avoid this behavior.
  % cachedir)
shape - whole brain bold time series:  (360, 43642)
shape - whole brain bold time series, attend left :  (108, 43642)
shape - whole brain bold time series, attend right :  (108, 43642)
In [7]:
"""
Plot the timeseries for a few voxels
"""
voxel_ids = [0,10,100]

plt.figure(figsize=(14, 4))
plt.title('Voxel activity for rightward trials, voxel ids = ' + str(voxel_ids));
plt.plot(bold_wb_r[:, voxel_ids]);
plt.ylabel('Evoked activity');
plt.xlabel('Timepoints');
sns.despine()

2. Create a seed

At this stage, we have loaded the whole-brain data for the attend-left and attend-right conditions. To examine the effects of attention, we are going to create seed ROIs and correlate their activity with other voxels in the brain. For any voxels that are correlated with a seed ROI, we can infer that they are functionally connected.

2.1 Create a spherical ROI

Let's create an ROI to define the parahippocampal place area (PPA), a scene selective region, which is likely important since scene stimuli were presented. The data are currently in MNI space, a standardized anatomical space that allows us to compare individual anatomy to a common, averaged space and use coordinates in MNI space to identify regions (approximately) in individual participants.

Exercise 2: Use this article to determine the center coordinates for the left and right PPA (use the ones reported in the 'Whole-brain analyses' section): Park, S., & Chun, M. M. (2009). Different roles of the parahippocampal place area (PPA) and retrosplenial cortex (RSC) in panoramic scene perception. NeuroImage, 47(4), 1747–1756. https://doi.org/10.1016/j.neuroimage.2009.04.058. Note: they use asymmetric ROI coordinates for the left and right PPA.

In [8]:
# Specify the center of the left and right ROIs

Nilearn has some powerful tools for drawing ROIs. These functions allow you to flexibly identify ROIs of any shape and have multiple parameters that allow for smoothing, detrending, filtering and standardization. However, it is easy to get things wrong with these functions so use these parameters cautiously. Most research labs have well-defined processing pipelines with these parameters set for multiple studies to avoid too much variation across studies. We will play it safe and use the most basic sphere ROI function from nilearn.
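A note on the empty cell above: coords_lPPA (and, for Exercise 4, coords_rPPA) should each be a list holding a single (x, y, z) tuple in MNI space. The values below are arbitrary placeholders showing the format only; they are not the answer to Exercise 2, so replace them with the coordinates you find in Park & Chun (2009).

In [ ]:
# FORMAT ONLY -- these placeholder coordinates must be replaced with your Exercise 2 answers
coords_lPPA = [(-30, -40, -10)]   # left PPA center, MNI (placeholder values)
coords_rPPA = [(30, -40, -10)]    # right PPA center, MNI (placeholder values)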

2.2 Plot the bold signal for the mask

The average bold signal for all the voxels in the mask is computed and plotted.

In [9]:
# Init the masking object
masker_lPPA = input_data.NiftiSpheresMasker(
    coords_lPPA, 
    radius=8, standardize=True, t_r=TR,  # TR = 1.5 s for this dataset
    memory='nilearn_cache', memory_level=1, verbose=0
)

# Mask the epi data and get a time series for the ROI
bold_lPPA = masker_lPPA.fit_transform(nii)

# Split the seed time series by attention condition (plotted in the next cell)
bold_lPPA_r = bold_lPPA[right_stim_lag == 1, :]
bold_lPPA_l = bold_lPPA[left_stim_lag == 1, :]
In [10]:
plt.figure(figsize=(14, 4))
plt.title('Left PPA ROI activity for right and left attention conditions')
plt.plot(bold_lPPA_r);
plt.plot(bold_lPPA_l);
plt.legend(('Attend Right', 'Attend Left'));
plt.ylabel('Evoked activity');
plt.xlabel('Timepoints');
sns.despine()

3. Compute the correlation matrix

We previously encountered correlation matrices in the exercise on pattern similarity. Each cell in those matrices corresponded to the spatial correlation of the pattern of activity for a pair of stimuli or tasks. We will again be using correlation matrices, but now for assessing functional connectivity. Each cell in these matrices reflects the correlation of the BOLD timeseries between a pair of voxels or regions, and matrices can be calculated separately for each condition or even each trial.

Below we go through a slow loop-based way of calculating the correlation of every voxel in the brain with the PPA seed region. We will revisit optimized ways of calculating correlations in the next exercise.

In [11]:
# Correlate seed with every brain voxel. Loop through and extract data for every voxel.
start_time = time.time()
num_voxels = bold_wb_r.shape[1]
all_corr = np.zeros((num_voxels, 1))
for v in range(num_voxels): 
    all_corr[v, 0] = np.corrcoef(bold_lPPA_r.flatten(), bold_wb_r[:, v])[0, 1]

end_time = time.time()
print('Analysis duration for %d voxels: %0.2fs' % (num_voxels, (end_time - start_time)))
Analysis duration for 43642 voxels: 2.41s

We are going to be calculating seed-based correlations throughout this notebook, so let's make a function. It is also common to transform the correlations to a Fisher-Z score, as the bounded nature of Pearson correlation violates certain statistical assumptions. This generally does not have a big effect on the results.

In [12]:
def seed_correlation(wbBold, seedBold):
    """Compute the correlation between a seed voxel vs. other voxels 
    Parameters
    ----------
    wbBold [2d array], n_stimuli x n_voxels 
    seedBold, 2d array, n_stimuli x 1

    Return
    ----------    
    seed_corr [2d array], n_stimuli x 1
    seed_corr_fishZ [2d array], n_stimuli x 1
    """
    num_voxels = wbBold.shape[1]
    seed_corr = np.zeros((num_voxels, 1))
    for v in range(num_voxels):    
        seed_corr[v, 0] = np.corrcoef(seedBold.flatten(), wbBold[:, v])[0, 1]
    # Transform the correlation values to Fisher z-scores
    seed_corr_fishZ = np.arctanh(seed_corr)
    return seed_corr, seed_corr_fishZ
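The loop above is easy to read but slow for tens of thousands of voxels. A vectorized equivalent (a sketch for reference; the notebook keeps using the loop version) computes the correlation for every voxel in a single matrix operation by averaging products of z-scores:

In [ ]:
# Vectorized seed correlation: Pearson r is the mean product of z-scores
def seed_correlation_fast(wbBold, seedBold):
    n_timepoints = wbBold.shape[0]
    wb_z = stats.zscore(wbBold, axis=0)         # z-score each voxel over time
    seed_z = stats.zscore(seedBold.flatten())   # z-score the seed time course
    seed_corr = (wb_z.T @ seed_z / n_timepoints).reshape(-1, 1)
    return seed_corr, np.arctanh(seed_corr)

# e.g., seed_correlation_fast(bold_wb_r, bold_lPPA_r) should match the loop version above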
In [13]:
# Let's use the function and print out the range of results
corr_lPPA_r, corr_fz_lPPA_r = seed_correlation(bold_wb_r, bold_lPPA_r)

print("Seed-based correlation Fisher-z transformed: min = %.3f; max = %.3f" % (
    corr_fz_lPPA_r.min(), corr_fz_lPPA_r.max()
))


# A histogram is always a useful first way of looking at your data.
plt.hist(corr_fz_lPPA_r)
plt.ylabel('Frequency');
plt.xlabel('Fisher-z score');
sns.despine()
Seed-based correlation Fisher-z transformed: min = -0.561; max = 1.875
In [14]:
# We can transform the correlation array back to a Nifti image object that we can save
img_corr_lPPA_r= masker_wb.inverse_transform(corr_fz_lPPA_r.T)
# img_corr_lPPA_r.to_filename('seed_rtstim.nii.gz')

3.1 Plot the seed correlations

We plot the seed correlation with every other voxel. For better visualization, we set a threshold, showing only voxels above the threshold. Typically, thresholds are chosen based on statistical significance. We have chosen the threshold arbitrarily below.

In [15]:
# Let's also visualize the correlation of the seed with every voxel
threshold = .8

# Nilearn has useful tools for plotting our results as a map
r_map_ar = plotting.plot_stat_map(
    img_corr_lPPA_r, 
    threshold=threshold,
    cut_coords=coords_lPPA[0],
)
# Add the seed
r_map_ar.add_markers(
    marker_coords=coords_lPPA, 
    marker_color='g',
    marker_size=50
)

Here is a different way to plot the same information

In [16]:
# Create a glass brain
plotting.plot_glass_brain(
    img_corr_lPPA_r, 
    threshold=threshold,
    colorbar=True, 
    plot_abs=False,
    display_mode='lyrz', 
);

Exercise 3: Compute and plot the correlation with the left PPA when participants are attending to the scene in the left visual field.

Note: The early stages of visual processing in the brain occur in the contralateral hemisphere. If you are viewing something in your left visual field, your right-hemisphere visual areas will show greater activity. See if you notice something similar in your results from this exercise.

In [17]:
# Insert code here

Exercise 4: Create a spherical ROI corresponding to the right PPA (using the coordinates you created earlier). Compute correlations across the whole-brain in both the 'attend left' and 'attend right' conditions and plot your results.

In [18]:
# Insert code here

4. Creating a seed from an atlas

In addition to creating our own seed ROIs, we can use available atlases to extract ROIs. Nilearn provides an easy way to accomplish this. To summarize up front:

  1. Use Nilearn to import an atlas parcellation. This will download the atlas and save it in your home directory by default.
  2. Explore the atlas: plot the different parcels and get their labels.
  3. Choose one parcel as the seed and extract the average BOLD signal over time from voxels in the parcel.
  4. Perform correlation with remaining parcels or voxels.
In [19]:
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')
atlas_filename = atlas.maps

# This is where the atlas is saved.
print("Atlas path: " + atlas_filename + "\n\n")

# Plot the ROIs
plotting.plot_roi(atlas_filename);
print('Harvard-Oxford cortical atlas')
Atlas path: /usr/people/mk35/nilearn_data/fsl/data/atlases/HarvardOxford/HarvardOxford-cort-maxprob-thr25-2mm.nii.gz


Harvard-Oxford cortical atlas
In [20]:
# Print the labels
# Label 0 (Background) marks voxels outside all parcels; it is unrelated to background connectivity

# Create a Pandas dataframe of the atlas data for easy inspection.
atlas_pd = pd.DataFrame(atlas)
print(atlas_pd['labels'])
0                                            Background
1                                          Frontal Pole
2                                        Insular Cortex
3                                Superior Frontal Gyrus
4                                  Middle Frontal Gyrus
5             Inferior Frontal Gyrus, pars triangularis
6              Inferior Frontal Gyrus, pars opercularis
7                                      Precentral Gyrus
8                                         Temporal Pole
9            Superior Temporal Gyrus, anterior division
10          Superior Temporal Gyrus, posterior division
11             Middle Temporal Gyrus, anterior division
12            Middle Temporal Gyrus, posterior division
13         Middle Temporal Gyrus, temporooccipital part
14           Inferior Temporal Gyrus, anterior division
15          Inferior Temporal Gyrus, posterior division
16       Inferior Temporal Gyrus, temporooccipital part
17                                    Postcentral Gyrus
18                             Superior Parietal Lobule
19               Supramarginal Gyrus, anterior division
20              Supramarginal Gyrus, posterior division
21                                        Angular Gyrus
22          Lateral Occipital Cortex, superior division
23          Lateral Occipital Cortex, inferior division
24                                Intracalcarine Cortex
25                                Frontal Medial Cortex
26    Juxtapositional Lobule Cortex (formerly Supple...
27                                   Subcallosal Cortex
28                                  Paracingulate Gyrus
29                   Cingulate Gyrus, anterior division
30                  Cingulate Gyrus, posterior division
31                                    Precuneous Cortex
32                                        Cuneal Cortex
33                               Frontal Orbital Cortex
34             Parahippocampal Gyrus, anterior division
35            Parahippocampal Gyrus, posterior division
36                                        Lingual Gyrus
37          Temporal Fusiform Cortex, anterior division
38         Temporal Fusiform Cortex, posterior division
39                   Temporal Occipital Fusiform Cortex
40                             Occipital Fusiform Gyrus
41                             Frontal Operculum Cortex
42                             Central Opercular Cortex
43                            Parietal Operculum Cortex
44                                        Planum Polare
45                  Heschl's Gyrus (includes H1 and H2)
46                                     Planum Temporale
47                                Supracalcarine Cortex
48                                       Occipital Pole
Name: labels, dtype: object
In [21]:
# Create a masker object that we can use to select ROIs
masker_ho = NiftiLabelsMasker(labels_img=atlas_filename)
print(masker_ho.get_params())

# Apply our atlas to the Nifti object so we can pull out data from single parcels/ROIs
bold_ho = masker_ho.fit_transform(nii)
print('shape: parcellated bold time courses: ', np.shape(bold_ho))
{'background_label': 0, 'detrend': False, 'dtype': None, 'high_pass': None, 'labels_img': '/usr/people/mk35/nilearn_data/fsl/data/atlases/HarvardOxford/HarvardOxford-cort-maxprob-thr25-2mm.nii.gz', 'low_pass': None, 'mask_img': None, 'memory': Memory(location=None), 'memory_level': 1, 'resampling_target': 'data', 'smoothing_fwhm': None, 'standardize': False, 't_r': None, 'verbose': 0}
shape: parcellated bold time courses:  (360, 48)

In previous analyses we calculated the average activity in a single seed region across time. However, Nilearn has tools to easily calculate the timecourse of activity across all of the ROIs that are supplied to the masker object.

In [22]:
# Get data for rightward attention only
bold_ho_r = bold_ho[(right_stim_lag==1),:]

# What does our data structure look like?
print("Parcellated data shape (time points x num ROIs)")
print("All time points  ", bold_ho.shape)
print("Rightward attention trials: ", bold_ho_r.shape)

# Pull out a single ROI corresponding to the posterior parahippocampal cortex
# Note that Label #35 is the Parahippocampal Gyrus, posterior division. 
roi_id = 34
bold_ho_pPHG_r = np.array(bold_ho_r[:, roi_id])
bold_ho_pPHG_r = bold_ho_pPHG_r.reshape(bold_ho_pPHG_r.shape[0],-1)
print("Posterior PPC (region 35) rightward attention trials: ", bold_ho_pPHG_r.shape)

plt.figure(figsize=(14,4))
plt.plot(bold_ho_pPHG_r)
plt.ylabel('Evoked activity');
plt.xlabel('Timepoints');
sns.despine()
Parcellated data shape (time points x num ROIs)
All time points   (360, 48)
Rightward attention trials:  (108, 48)
Posterior PHG (region 35) rightward attention trials:  (108, 1)
In [23]:
# Like before we want to correlate the whole brain time course with the seed we have pulled out

corr_pPHG_r, corr_fz_pPHG_r = seed_correlation(
    bold_wb_r, bold_ho_pPHG_r
) 

# Print the range of correlations.
print("PHG correlation Fisher-z transformed: min = %.3f; max = %.3f" % (
    corr_fz_pPHG_r.min(), corr_fz_pPHG_r.max())
)

# Plot a histogram
plt.hist(corr_fz_pPHG_r)
plt.ylabel('Frequency');
plt.xlabel('Fisher-z score');
PHG correlation Fisher-z transformed: min = -0.656; max = 1.088
In [24]:
# Map back to the whole brain image
img_corr_pPHG_r = masker_wb.inverse_transform(
    corr_fz_pPHG_r.T
)

threshold = .8 

# Find the cut coordinates of this ROI, using parcellation.
# This function takes the atlas path and the hemisphere and outputs all centers of the ROIs
roi_coords = plotting.find_parcellation_cut_coords(atlas_filename,label_hemisphere='left')

# Pull out the coordinate for this ROI
roi_coord = roi_coords[roi_id,:]

# Plot the correlation as a map on a standard brain,
# centered on the cut coordinates of this parcel.
h2 = plotting.plot_stat_map(
    img_corr_pPHG_r, 
    threshold=threshold,
    cut_coords=roi_coord,
)

# Create a glass brain
plotting.plot_glass_brain(
    img_corr_pPHG_r, 
    threshold=threshold,
    colorbar=True, 
    display_mode='lyrz', 
    plot_abs=False
)
Out[24]:
<nilearn.plotting.displays.LYRZProjector at 0x7f761c527ac8>

4.1 Compute connectivity across parcels

In addition to seeding from a single ROI, we can compute correlations between all pairs of parcels. Nilearn has a function to do this quite easily. This will be useful when we want to study attention in different brain regions.

We will also plot the connectivity matrix in a few different ways.

In [25]:
# Set up the connectivity object
correlation_measure = ConnectivityMeasure(kind='correlation')

# Calculate the correlation of each parcel with every other parcel

corr_mat_ho_r = correlation_measure.fit_transform([bold_ho_r])[0]

# Remove the diagonal for visualization (guaranteed to be 1.0)
np.fill_diagonal(corr_mat_ho_r, np.nan)

# Plot the correlation matrix
# The labels of the Harvard-Oxford Cortical Atlas that we are using 
# start with the background (0), hence we skip the first label
fig = plt.figure(figsize=(11,10))
plt.imshow(corr_mat_ho_r, interpolation='None', cmap='RdYlBu_r')
plt.yticks(range(len(atlas.labels) - 1), atlas.labels[1:]);
plt.xticks(range(len(atlas.labels) - 1), atlas.labels[1:], rotation=90);
plt.title('Parcellation correlation matrix')
plt.colorbar();
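As an aside, the ConnectivityMeasure object used above supports estimators other than plain correlation, for example kind='partial correlation', which controls for all other parcels when estimating each pairwise relationship. We stick with plain correlation in this notebook; the lines below are only a pointer to the alternative.

In [ ]:
# Optional alternative estimator (not used in the rest of this notebook)
partial_measure = ConnectivityMeasure(kind='partial correlation')
# partial_mat_ho_r = partial_measure.fit_transform([bold_ho_r])[0]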
In [26]:
# Alternatively, we could use Nilearn's own plotting function
plotting.plot_matrix(
    corr_mat_ho_r, 
    cmap='RdYlBu_r', 
    figure=(11, 10), 
    labels=atlas.labels[1:], 
)
Out[26]:
<matplotlib.image.AxesImage at 0x7f76143224a8>

Exercise 5: We mentioned earlier that parietal cortex modulates sensory processing. You are now ready to examine this. Use a parcel from the superior parietal lobe as a seed and compute the voxelwise correlation across the brain for the attend left and attend right conditions.

When plotting the data, use plotting.find_parcellation_cut_coords to find the center of each ROI. Then use these coordinates to specify the cut_coords on which you want to center your plot.

Also note that roi_id does not align with the ROI names in the pandas dataframe printed above. Specifically, each roi_id is 1 less than the number reported in that table, because no ROI is made for the background.
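To double-check this offset, you can map a column index of bold_ho back to its atlas label:

In [ ]:
# Column roi_id of bold_ho corresponds to atlas.labels[roi_id + 1],
# because label 0 ('Background') gets no column of its own
print(atlas.labels[34 + 1])   # the parcel used above: 'Parahippocampal Gyrus, posterior division'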

In [27]:
# Insert code here

5. Background connectivity

There is a potential problem in analyzing functional connectivity during tasks. Consider brain regions A and B. Let's assume that our stimuli activate both regions. If we were to examine correlations between the regions, they would have strong connectivity, but not because they are necessarily communicating or interacting in any way. Rather, they share the stimulus as a third variable. One solution is to regress out the stimulus-evoked responses from our signal and re-examine the correlations between regions. If region A and B are still correlated, we are on more solid footing that they are functionally connected during the task. Insofar as this "background connectivity" differs between task conditions (e.g., attend left vs. right), we can conclude that the task is modulating the scaffold of noise correlations in the brain. To learn more about background connectivity, see this review.

In background connectivity analysis, stimulus-driven activation is not the desired effect of interest, but potentially a confound. Thus, now we need to remove "stimulus confounds" before continuing. Lucky for us, the dataset in the directory ../processed_data/background/ has already had the evoked activity and other nuisance variables regressed out. We'll repeat the previous analyses for the left PPA on these data.
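To make the idea concrete, here is a deliberately simplified sketch of removing task-evoked activity with ordinary least squares. This is an illustration only, not the pipeline used to create the provided background dataset: a real analysis would convolve the task regressors with a hemodynamic response function and include motion and other nuisance regressors.

In [ ]:
# Simplified illustration of regressing task-evoked activity out of each voxel
def regress_out_task(bold, task_regressors):
    """Return residual time courses after removing task regressors.

    bold: (n_timepoints, n_voxels); task_regressors: (n_timepoints, n_regressors)
    """
    X = np.column_stack([task_regressors, np.ones(bold.shape[0])])  # add an intercept
    betas, _, _, _ = np.linalg.lstsq(X, bold, rcond=None)
    return bold - X @ betas

# e.g., using the (unconvolved) attention boxcars as crude task regressors:
# background_wb = regress_out_task(bold_wb, np.column_stack([right_stim_lag, left_stim_lag]))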

In [28]:
# Load in the data
sub = 'sub01'
epi_in_mcg = (dir_motion_background + '%s.nii.gz' % (sub))

# Get the seed data
bold_lPPA_mcg = masker_lPPA.fit_transform(epi_in_mcg)
bold_lPPA_r_mcg = bold_lPPA_mcg[right_stim_lag==1,:]
bold_lPPA_l_mcg = bold_lPPA_mcg[left_stim_lag==1,:]

# Get the whole brain data
boldWB_mcg = masker_wb.fit_transform(epi_in_mcg)
boldWB_r_mcg = boldWB_mcg[right_stim_lag==1,:] 
boldWB_l_mcg = boldWB_mcg[left_stim_lag==1,:] 

# plot the data
plt.figure(figsize=(14,4))
plt.plot(bold_lPPA_r_mcg)
plt.plot(bold_lPPA_l_mcg)
plt.legend(('Attend Right', 'Attend Left'));
plt.ylabel('BOLD signal, standardized')
plt.xlabel('TRs within attention blocks')
plt.title('Background activity in seed region')
sns.despine()
In [29]:
# Calculate the voxelwise seed-based correlation
corr_lPPA_r_mcg, corr_fz_lPPA_r_mcg  = seed_correlation(boldWB_r_mcg, bold_lPPA_r_mcg)
corr_lPPA_l_mcg, corr_fz_lPPA_l_mcg  = seed_correlation(boldWB_l_mcg, bold_lPPA_l_mcg)

# Make an image 
img_corr_fz_lPPA_r_mcg = masker_wb.inverse_transform(corr_fz_lPPA_r_mcg.T)
img_corr_fz_lPPA_l_mcg = masker_wb.inverse_transform(corr_fz_lPPA_l_mcg.T)

print('Left PPA seed correlation (background data), Fisher-z transformed')
print("Right: min = %.3f; max = %.3f" % (
    corr_fz_lPPA_r_mcg.min(), corr_fz_lPPA_r_mcg.max()))
print("Left: min = %.3f; max = %.3f" % (
    corr_fz_lPPA_l_mcg.min(), corr_fz_lPPA_l_mcg.max()))
Left PPA seed correlation (background data), Fisher-z transformed
Right: min = -0.823; max = 1.678
Left: min = -0.583; max = 1.425
In [30]:
# Plot the correlation of each voxel in the brain with the seed
threshold = .8
vmax = np.max(np.stack([corr_fz_lPPA_r_mcg, corr_fz_lPPA_l_mcg]))

f, axes = plt.subplots(2, 2, figsize = (16, 7))

# left
axes[0,0].set_title('Attend left, lPPA')
r_map = plotting.plot_stat_map(
    img_corr_fz_lPPA_l_mcg, 
    threshold=threshold, vmax=vmax, 
    cut_coords=coords_lPPA[0], 
    axes=axes[0,0]
)
r_map.add_markers(
    marker_coords=coords_lPPA, 
    marker_color='g',
    marker_size=50
)
plotting.plot_glass_brain(
    img_corr_fz_lPPA_l_mcg, 
    threshold=threshold, vmax=vmax, 
    colorbar=True, 
    display_mode='lyrz', 
    plot_abs=False, 
    axes=axes[1,0]
)

# right
axes[0,1].set_title('Attend right, lPPA')
r_map = plotting.plot_stat_map(
    img_corr_fz_lPPA_r_mcg, 
    threshold=threshold, vmax=vmax, 
    cut_coords=coords_lPPA[0], 
    axes=axes[0,1]
)
r_map.add_markers(
    marker_coords=coords_lPPA, 
    marker_color='g',
    marker_size=50
)
plotting.plot_glass_brain(
    img_corr_fz_lPPA_r_mcg, 
    threshold=threshold, vmax=vmax, 
    colorbar=True, 
    display_mode='lyrz', 
    plot_abs=False, 
    axes=axes[1,1]
)
Out[30]:
<nilearn.plotting.displays.LYRZProjector at 0x7f7614145358>

Exercise 6: Compare background connectivity (above) to the original left PPA spherical ROI result (from 3.1). Explain why this difference might exist.

A:

Make a correlation matrix with the background connectivity data for one condition.

In [31]:
# Parcellate the time course to get the background connectivity parcels
parcel_time_series_mcg = masker_ho.fit_transform(epi_in_mcg)
boldParcel_rightstim_mcg = parcel_time_series_mcg[right_stim_lag==1, :]

correlation_matrix_mcg = correlation_measure.fit_transform([boldParcel_rightstim_mcg])[0]

# Remove the diagonal for visualization (guaranteed to be 1.0)
np.fill_diagonal(correlation_matrix_mcg, np.nan)

# Plot the correlation matrix
fig=plt.figure(figsize=(11, 10))

# The labels we have start with the background (0), hence we skip the first label
plt.title('Background connectivity correlation for parcellation')
plt.imshow(correlation_matrix_mcg, interpolation='None', cmap='RdYlBu_r')
plt.yticks(range(len(atlas.labels) - 1), atlas.labels[1:]);
plt.xticks(range(len(atlas.labels) - 1), atlas.labels[1:], rotation=90);
plt.colorbar()
Out[31]:
<matplotlib.colorbar.Colorbar at 0x7f76039c3ac8>

Exercise 7: Use a different atlas (type datasets.fetch_atlas_ and use tab completion to see the other available atlases) and recompute the background connectivity matrix for both the attend left and attend right conditions. Use an atlas that distinguishes ROIs between the left and right hemispheres, and rearrange the labels so that all the ROIs from a hemisphere are grouped together. What structure, if any, do you notice in the correlation matrix, and what does it mean?

A:

In [32]:
# Insert code here

6. Group analyses

Exercise 8: Calculate and store the two matrices computed in Exercise 7 (functional connectivity separately for each attention condition) for all 30 participants. Plot the average across participants for each condition.

Hint: We recommend that you write helper functions that encapsulate some of the steps above. For instance, one function could get the timing information for each participant, as we did in 1.1, and another could take in an atlas, the data, and the stimulus labels and return a correlation matrix. Then loop through the participants and stack the matrices you create (since these are 2D matrices, you can stack them in depth with np.dstack); a short illustration of the stacking pattern follows below.
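This is not a solution to the exercise, but the stacking pattern the hint refers to looks like this (random matrices stand in for the real per-participant results):

In [ ]:
# Illustration of the stacking pattern only (random stand-in matrices)
demo_mats = [np.random.rand(48, 48) for _ in range(30)]   # one matrix per participant
stacked = np.dstack(demo_mats)                             # shape (48, 48, 30)
group_mean = np.mean(stacked, axis=2)                      # average across participants
print(stacked.shape, group_mean.shape)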

In [33]:
# Insert code here

Exercise 9: Taking the data from Exercise 8, calculate the difference between the left attention and right attention correlation matrices for each participant (if they are stacked matrices, you only need to subtract the two 3d arrays). You should now have 30 difference matrices. This allows you to conduct a simple statistical test of how reliably left vs. right attention affects background connectivity between parcels in the sample. We will use a one-sample t-test for this purpose (stats.ttest_1samp), which takes in the difference matrices across participants and a popmean. The popmean for this test is 0, the null-hypothesis value we are testing against.

Summarize in words which regions' connectivity (if any) distinguishes attention conditions.

In [34]:
# Insert code here

Plotting a connectome

Nilearn has some beautiful tools for plotting connectomes. plotting.plot_connectome takes in a node-by-node correlation matrix and the coordinates of each node, and then draws the connectome. A threshold can be used to show only the strongest connections.

In [35]:
# Load the atlas
atlas_nii = nib.load(atlas_filename)
atlas_data = atlas_nii.get_data()
labels = np.unique(atlas_data)

# Iterate through all of the ROIs
coords = []
for label_id in labels:
    
    # Skip the background
    if label_id == 0:
        continue
        
    # Pull out the binary mask for this ROI
    roi_mask = (atlas_data == label_id)
    
    # Wrap it as a Nifti object so it can be read by the cut-coords algorithm
    # (a new variable name avoids overwriting the subject data stored in `nii`)
    roi_nii = nib.Nifti1Image(roi_mask.astype('int16'), atlas_nii.affine)
    
    # Find the center of mass of this ROI
    coords.append(plotting.find_xyz_cut_coords(roi_nii))
    
# Plot the connectome
plotting.plot_connectome(correlation_matrix_mcg, coords, edge_threshold='95%')
Out[35]:
<nilearn.plotting.displays.OrthoProjector at 0x7f7613ad7da0>
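The loop above makes the logic explicit, but nilearn can also return all parcel centers in a single call (the same helper we used earlier in this notebook), which gives a very similar connectome:

In [ ]:
# Shortcut: get every parcel's center coordinates in one call
coords_alt = plotting.find_parcellation_cut_coords(atlas_filename)
plotting.plot_connectome(correlation_matrix_mcg, coords_alt, edge_threshold='95%')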

Novel contribution: be creative and make one new discovery by adding an analysis, visualization, or optimization.

Contributions

B. Hutchinson provided the data and initial code
M. Kumar, C. Ellis and N. Turk-Browne produced the initial notebook 3/15/18
Q. Lu added solutions
K.A. Norman provided suggestions on the overall content and made edits to this notebook.
C. Ellis implemented updates from cmhn-s19