Hierarchical Topographic Factor Analysis

By Jeremy R. Manning (jeremy.r.manning@dartmouth.edu) and Paxton C. Fitzpatrick (Paxton.C.Fitzpatrick@dartmouth.edu)

Overview

In this demonstration, we'll be using the BrainIAK Python toolbox to apply Hierarchical Topographic Factor Analysis (HTFA) to an fMRI dataset.

The demo will comprise three main steps:

  1. Apply HTFA to the dataset to discover a basis set of network "nodes"
  2. Apply a dynamic correlation model to the HTFA fits to characterize the network dynamics
  3. Visualize the network dynamics in two ways: as animated chord diagrams, and as animated brain network plots

Annotated bibliography

  1. Manning JR, Ranganath R, Norman KA, Blei DM (2014). Topographic Factor Analysis: a Bayesian model for inferring brain networks from neural data. PLoS One, 9(5): e94914. link Describes a single-subject model (TFA) for inferring brain network hubs and applies it to a semantic decoding dataset.

  2. Manning JR, Zhu X, Willke TL, Ranganath R, Stachenfeld K, Hasson U, Blei DM, Norman KA (2018). A probabilistic approach to discovering dynamic full-brain functional connectivity patterns. NeuroImage, 180: 243-252. link Describes a multi-subject (hierarchical) model (HTFA) for inferring shared brain network hubs and applies it to a story listening and movie viewing dataset.

  3. Owen LLW, Chang TH, Manning JR (2020). High-level cognition during story listening is reflected in high-order dynamic correlations in neural activity patterns. bioRxiv. link Describes a model for inferring network dynamics from timeseries data and applies it to HTFA fits to a story listening dataset.

Getting started

The easiest way to run this notebook is to download and install Docker on your local machine, and then build the Docker image in this folder. That will install the necessary toolboxes and dependencies, and will also download the data you'll be analyzing. Follow the instructions for your platform to download and install Docker, and then start the Docker Desktop application.

After you've installed Docker, to build the docker image, just navigate to this folder and run:

docker build --rm --force-rm -t htfa .

To start the image for the first time, run:

docker run -it -p 8888:8888 --name htfa -v $PWD:/mnt htfa

and on subsequent times, run:

docker start htfa && docker attach htfa

When the Docker container starts, it will automatically launch a Jupyter notebook server. Copy and paste the third link printed to the terminal into a browser to interact with this notebook.

To stop running the container, run:

docker stop htfa

Code

Run the cells below (in sequence) to load in the example dataset, fit HTFA to the data, and visualize the resulting network dynamics.

Initialization

Import libraries and helper functions and load the dataset. The dataset we'll be analyzing is a subset of the story listening dataset collected by Simony et al. (2016).

Fit HTFA to data

First we need to convert the dataset into CMU format. Consistent with CMU format, nilearn expects data matrices with number-of-timepoints rows and number-of-voxels columns. BrainIAK expects the transpose of that format: number-of-voxels by number-of-timepoints matrices. We can easily convert between the two formats as shown below.
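Because the two formats are simply transposes of one another, the conversion is a one-liner in numpy. Here is a minimal sketch using hypothetical dimensions (200 timepoints, 1,000 voxels):

```python
import numpy as np

# Hypothetical single-subject example: 200 timepoints, 1,000 voxels
n_trs, n_voxels = 200, 1000
cmu_format = np.random.randn(n_trs, n_voxels)  # timepoints x voxels (nilearn/CMU)

# BrainIAK expects voxels x timepoints, so transpose
brainiak_format = cmu_format.T
assert brainiak_format.shape == (n_voxels, n_trs)

# Transposing again recovers the original layout exactly
assert np.array_equal(brainiak_format.T, cmu_format)
```

Since `.T` returns a view rather than a copy, this conversion is essentially free even for large datasets.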

After wrangling the data, we'll fit HTFA to the full dataset to identify network nodes.
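As background for what the fit produces: TFA/HTFA's generative model (Manning et al., 2014, 2018) describes each network node as a Gaussian radial basis function (RBF) over voxel locations, and approximates each subject's data as per-timepoint node weights multiplied by those node images. The numpy sketch below (all names and dimensions are hypothetical) illustrates that factorization; in the notebook, BrainIAK's HTFA implementation estimates the centers, widths, and weights from the real data.

```python
import numpy as np

def rbf_images(voxel_locs, centers, widths):
    """Spatial images of K nodes, each a Gaussian RBF over voxel locations.

    voxel_locs: V x 3 voxel coordinates; centers: K x 3; widths: length-K.
    Returns a K x V matrix of node images.
    """
    d2 = ((voxel_locs[None, :, :] - centers[:, None, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / widths[:, None])

rng = np.random.default_rng(1)
V, K, T = 500, 4, 30  # hypothetical voxel, node, and timepoint counts
voxel_locs = rng.uniform(-50, 50, size=(V, 3))
centers = rng.uniform(-40, 40, size=(K, 3))
widths = rng.uniform(100, 400, size=K)

F = rbf_images(voxel_locs, centers, widths)  # K nodes x V voxels
W = rng.standard_normal((T, K))              # per-timepoint node weights
Y = W @ F                                    # reconstructed data: T x V
assert Y.shape == (T, V)
```

The low-dimensional weight matrix `W` is what the later dynamic-correlation step operates on.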

Plotting HTFA global and local node locations

We'll generate a plot where the global node locations are shown in black, and each subject's "local" node locations are shown in color (with each subject assigned a different color). The nodes should be in similar (but not identical) locations across subjects. Note: if the number of nodes or iterations is small, or if only a small fraction of voxels and/or timepoints is sampled, the final result will tend to remain close to the initialized values. Increase max_global_iter, max_local_iter, max_voxel, and max_tr to achieve greater accuracy.
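The idea behind the plot can be sketched with simulated centers (all coordinates here are made up; the notebook plots the actual fitted HTFA centers): global node locations in black, and each subject's perturbed "local" locations in a distinct color.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_nodes, n_subjects = 10, 3
global_centers = rng.uniform(-60, 60, size=(n_nodes, 3))  # hypothetical coords
# Simulate each subject's local centers as small jitters of the global ones
local_centers = [global_centers + rng.normal(0, 3, size=(n_nodes, 3))
                 for _ in range(n_subjects)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*global_centers.T, c="k", s=40, label="global")
for s, locs in enumerate(local_centers):
    ax.scatter(*locs.T, s=15, label=f"subject {s + 1}")
ax.legend()
plt.close(fig)  # in the notebook, the figure would be displayed inline
```

Because the hierarchical model shrinks local estimates toward the global template, each subject's points cluster tightly around the corresponding black markers.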

Compute dynamic correlations

The per-participant timeseries of node activations provide a low-dimensional embedding of the original data that we can use to efficiently examine dynamic connectivity patterns. Obtaining these embeddings requires some data wrangling.
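To make "dynamic correlations" concrete, here is a simple boxcar sliding-window version over one subject's (hypothetical) node timeseries. Note that Owen et al. (2020) instead use Gaussian-weighted correlations (via their timecorr approach); the boxcar window below is just an illustrative stand-in.

```python
import numpy as np

def sliding_window_corr(node_timeseries, width):
    """Node-by-node correlation matrix within each sliding window.

    node_timeseries: timepoints x nodes array (e.g., HTFA node weights).
    Returns an array of shape (n_windows, nodes, nodes).
    """
    T, K = node_timeseries.shape
    corrs = []
    for t in range(T - width + 1):
        window = node_timeseries[t:t + width]
        corrs.append(np.corrcoef(window, rowvar=False))
    return np.stack(corrs)

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 5))  # 100 timepoints, 5 hypothetical nodes
dyn = sliding_window_corr(ts, width=20)
assert dyn.shape == (81, 5, 5)
```

Each slice `dyn[t]` is a symmetric correlation matrix whose off-diagonal entries drive the chord-diagram and brain-network animations below.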

Generate animated chord diagrams

The cells below generate interactive figures. After running each cell, move the sliders to change which timepoints are displayed. The next cell generates a figure that displays network dynamics during the "intact" story listening experimental condition, and the subsequent cell generates a figure that displays network dynamics during the "word-scrambled" experimental condition.

Generate animated brain network plots

Run the cells below to generate the animations. Individual animated frames may be found in the intact_frames and scrambled_frames sub-folders of this directory. The frames are stitched together into an mp4 file in order to display the animation in the notebook. You can right-click on the animations to save the files.

The next cell generates an animation for the "intact" experimental condition, and the subsequent cell generates an animation for the "word-scrambled" experimental condition.

Summary

Using HTFA, we were able to quickly and easily examine and compare network dynamic patterns in a large fMRI dataset, using only modest computing resources. The resulting networks are intuitive and straightforward to visualize.