{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# fmrisim demo script\n", "\n", "Authors: Cameron T Ellis ([cameron.ellis@yale.edu](mailto:cameron.ellis@yale.edu)), Christopher Baldassano ([c.baldassano@columbia.edu](mailto:c.baldassano@columbia.edu)), Anna C Schapiro ([aschapir@sas.upenn.edu](mailto:aschapir@sas.upenn.edu)), Ming Bo Cai ([mingbo.cai@ircn.jp](mailto:mingbo.cai@ircn.jp)), Jonathan D Cohen ([jdc@princeton.edu](mailto:jdc@princeton.edu)) \n", "\n", "## Overview\n", "\n", "Example script to demonstrate fmrisim functionality. This generates data for a two condition, event-related design in which each condition evokes different activity within the same voxels. It then runs simple univariate and multivariate analyses on the data. \n", "\n", "If you would like a script that can be executed from the command line and scaffolds the simulation process, refer to the `fmrisim_real_time_generator.py` script in the `brainiak/utils` folder. \n", "\n", "\n", "## Annotated bibliography\n", "\n", "1. Ellis, C. T., Baldassano, C., Schapiro, A. C., Cai, M. B., Cohen, J. D. (2020). Facilitating open-science with realistic fMRI simulation: validation and application. *PeerJ* 8:e8564 [`link`](https://peerj.com/articles/8564/)\n", "*Describes and validates the fmrisim method. Applies it to a dataset to test alternative design parameters and evaluate how these parameters influence the effect size* \n", "\n", "2. Ellis, C. T., Lesnick, M., Henselman-Petrusek, G., Keller, B., & Cohen, J. D. (2019). Feasibility of topological data analysis for event-related fMRI, Network Neuroscience, 1-12 [`link`](https://www.mitpressjournals.org/doi/full/10.1162/netn_a_00095?mobileUi=0)\n", "*Example of using fmrisim to evaluate the plausibility of an analysis procedure under different signal parameters and design constraints* \n", "\n", "3. Kumar, S., Ellis, C., O'Connell, T. P., Chun, M. M., & Turk-Browne, N. B. (in press). Searching through functional space reveals distributed visual, auditory, and semantic coding in the human brain. *PLoS Computational Biology* [`link`](https://www.biorxiv.org/content/10.1101/2020.04.20.052175v1)\n", "*Example of using fmrisim to test different possible neural bases for an observed effect in real data* \n", "\n", "4. Welvaert, M., et al. (2011) neuRosim: An R package for generating fMRI data. 
*Journal of Statistical Software*, 44, 1-18 [`link`](https://www.jstatsoft.org/article/view/v044i10)\n", "*A package in R for simulating fMRI data that was an inspiration for fmrisim*\n", "\n", "\n", "## Table of Contents\n", "\n", "[1. Set parameters](#1-set-parameters)\n", "- [1.1 Import necessary Python packages](#11-import-necessary-python-packages)\n", "- [1.2 Load participant data](#12-load-participant-data)\n", "- [1.3 Specify participant dimensions and resolution](#13-specify-participant-dimensions-and-resolution)\n", "- [1.4 Generate an activity template and a mask](#14-generate-an-activity-template-and-a-mask)\n", "- [1.5 Determine noise parameters](#15-determine-noise-parameters)\n", "\n", "[2. Generate noise](#2-generate-noise)\n", "- [2.1 Create temporal noise](#21-create-temporal-noise)\n", "- [2.2 Create system noise](#22-create-system-noise)\n", "- [2.3 Combine noise and template](#23-combine-noise-and-template)\n", "- [2.4 Fit the data to the noise parameters](#24-fit-the-data-to-the-noise-parameters)\n", "\n", "[3. Generate signal](#3-generate-signal)\n", "- [3.1 Specify which voxels in the brain contain signal](#31-specify-which-voxels-in-the-brain-contain-signal)\n", "- [3.2 Characterize signal for voxels](#32-characterize-signal-for-voxels)\n", "- [3.3 Generate event time course](#33-generate-event-time-course)\n", "- [3.4 Export stimulus time course for analysis](#34-export-stimulus-time-course-for-analysis)\n", "- [3.5 Estimate the voxel weight for each event](#35-estimate-the-voxel-weight-for-each-event)\n", "- [3.6 Convolve each voxel's time course with the hemodynamic response function](#36-convolve-each-voxels-time-course-with-the-hemodynamic-response-function)\n", "- [3.7 Establish signal magnitude](#37-establish-signal-magnitude)\n", "- [3.8 Multiply the convolved response with the signal voxels](#38-multiply-the-convolved-response-with-the-signal-voxels)\n", "- [3.9 Combine signal and noise](#39-combine-signal-and-noise)\n", "\n", "[4. Analyse data](#4-analyse-data)\n", "- [4.1 Pull out data for each trial](#41-pull-out-data-for-each-trial)\n", "- [4.2 Represent the data](#42-represent-the-data)\n", "- [4.3 Test for univariate effect](#43-test-for-univariate-effect)\n", "- [4.4 Test for a multivariate effect](#44-test-for-a-multivariate-effect)\n", "\n", "[Summary](#summary)\n", "\n", "[References](#references)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Set parameters\n", "\n", "It is necessary to set various parameters that describe how the signal and the noise will be generated." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.1 Import necessary Python packages\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib notebook\n", "\n", "from pathlib import Path\n", "from brainiak.utils import fmrisim\n", "import nibabel\n", "import numpy as np\n", "import numpy.matlib\n", "import matplotlib.pyplot as plt\n", "import scipy.ndimage as ndimage\n", "import scipy.spatial.distance as sp_distance\n", "import sklearn.manifold as manifold\n", "import scipy.stats as stats\n", "import sklearn.model_selection\n", "import sklearn.svm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.2 Load participant data\n", "\n", "Any 4-dimensional fMRI data that is readable by nibabel can be used as input to this pipeline. For this example, data is taken from the open access repository DataSpace: http://arks.princeton.edu/ark:/88435/dsp01dn39x4181. This file is unzipped and placed in the same directory as this notebook with the name Corr_MVPA." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "nii = nibabel.load('Corr_MVPA/Participant_01_rest_run01.nii')\n", "volume = nii.get_fdata()" ] },
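{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (an illustrative addition, not part of the original demo), the following cell confirms that the input is the 4-dimensional volume the pipeline expects. It only uses the `nii` and `volume` variables defined above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sanity check: fmrisim expects a 4-D volume (x, y, z, time)\n", "assert volume.ndim == 4, 'Expected 4-D fMRI data (x, y, z, time)'\n", "print('Data shape:', volume.shape)\n", "print('Header voxel sizes:', nii.header.get_zooms())" ] },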
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Volume dimensions: (64, 64, 27, 294)\n", "TR duration: 1.50s\n" ] } ], "source": [ "dim = volume.shape # What is the size of the volume\n", "dimsize = nii.header.get_zooms() # Get voxel dimensions from the nifti header\n", "tr = dimsize[3]\n", "if tr > 100: # If high then these values are likely in ms and so fix it\n", " tr /= 1000\n", "print('Volume dimensions:', dim)\n", "print('TR duration: %0.2fs' % tr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.4 Generate an activity template and a mask\n", "\n", "Functions in fmrisim require a continuous map that describes the appropriate average MR value for each voxel in the brain and a mask which specifies voxels in the brain versus voxels outside of the brain. One way to generate both of these volumes is the mask_brain function. At a minimum, this takes as an input the fMRI volume to be simulated. To create the template this volume is averaged over time and bounded to a range from 0 to 1. In other words, voxels with a high value in the template have high activity over time. To create a mask, the template is thresholded. This threshold can be set manually or instead an appropriate value can be determined by looking for the minima between the two first peaks in the histogram of voxel values. If you would prefer, you could use the [compute_epi_mask](http://nilearn.github.io/modules/generated/nilearn.masking.compute_epi_mask.html) function in nilearn which uses a similar method." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "mask, template = fmrisim.mask_brain(volume=volume, \n", " mask_self=True,\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.5 Determine noise parameters\n", "\n", "A critical step in the fmrisim toolbox is determining the noise parameters of the volume to be created. Many noise parameters are available for specification and if any are not set then they will default to reasonable values. As mentioned before, it is instead possible to provide raw fMRI data that will be used to estimate these noise parameters. The goal of the noise estimation is to calculate general descriptive statistics about the noise in the brain that are thought to be important. The simulations are then useful for understanding how signals will survive analyses when embedded in realistic neural noise. \n", "\n", "Now the disclaimers: the values here are only an estimate and will depend on noise properties combining in the ways assumed. In addition, because of the non-linearity and stochasticity of this simulation, this estimation is not fully invertible: if you generate a dataset with a set of noise parameters it will have similar but not the same noise parameters as a result. Moreover, complex interactions between brain regions that likely better describe brain noise are not modelled here: this toolbox pays no attention to regions of the brain or their interactions. Finally, for best results use raw fMRI because if the data has been preprocessed then assumptions this algorithm makes are likely to be erroneous. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.5 Determine noise parameters\n", "\n", "A critical step in the fmrisim toolbox is determining the noise parameters of the volume to be created. Many noise parameters are available for specification, and any that are not set will default to reasonable values. As mentioned before, it is instead possible to provide raw fMRI data that will be used to estimate these noise parameters. The goal of the noise estimation is to calculate general descriptive statistics about the noise in the brain that are thought to be important. The simulations are then useful for understanding how signals will survive analyses when embedded in realistic neural noise. \n", "\n", "Now the disclaimers: the values here are only an estimate and depend on the noise properties combining in the ways assumed. In addition, because of the non-linearity and stochasticity of this simulation, this estimation is not fully invertible: if you generate a dataset with a set of noise parameters, the result will have similar, but not identical, noise parameters. Moreover, complex interactions between brain regions that likely better describe brain noise are not modelled here: this toolbox pays no attention to regions of the brain or their interactions. Finally, for best results use raw fMRI data, because if the data has been preprocessed then the assumptions this algorithm makes are likely to be violated. For instance, if the brain has been masked then variance in non-brain voxels is eliminated, which means that calculations of noise that depend on those voxels as a reference will fail.\n", "\n", "To ameliorate some of these concerns, it is possible to fit the spatial and temporal noise properties of the data. This iterates over the noise generation process and tunes parameters in order to match those that are provided. This is time consuming (especially for fitting the temporal noise) but is helpful in matching the specified noise properties. \n", "\n", "This toolbox separates noise into two categories: spatial noise and temporal noise. To estimate spatial noise, both the smoothness and the amount of non-brain noise of the data must be quantified. For smoothness, the Full Width Half Max (FWHM) of the volume is averaged over the X, Y and Z dimensions and then averaged across a sample of time points. To calculate the Signal to Noise Ratio (SNR), the mean activity in brain voxels for the middle time point is divided by the standard deviation in activity across non-brain voxels for that time point. For temporal noise, an autoregressive moving average (ARMA) process is estimated, along with the overall size of temporal variability. A sample of brain voxels is used to estimate the first AR component and the first MA component of each voxel's activity over time using the statsmodels package. The Signal to Fluctuation Noise Ratio (SFNR) is calculated by dividing the average activity of each voxel in the brain by that voxel's noise (Friedman & Glover, 2006). That noise is calculated by taking the standard deviation of the voxel over time after it has been detrended with a second-order polynomial. The SFNR then controls the amount of functional variability. Other types of noise can be generated, such as physiological noise, but are not estimated by this function.\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Calculate the noise parameters from the data. Set it up to be matched.\n", "noise_dict = {'voxel_size': [dimsize[0], dimsize[1], dimsize[2]], 'matched': 1}\n", "noise_dict = fmrisim.calc_noise(volume=volume,\n", " mask=mask,\n", " template=template,\n", " noise_dict=noise_dict,\n", " )" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Noise parameters of the data were estimated as follows:\n", "SNR: 23.175648200023815\n", "SFNR: 70.7171164884859\n", "FWHM: 5.661164924881941\n" ] } ], "source": [ "print('Noise parameters of the data were estimated as follows:')\n", "print('SNR: ' + str(noise_dict['snr']))\n", "print('SFNR: ' + str(noise_dict['sfnr']))\n", "print('FWHM: ' + str(noise_dict['fwhm']))" ] },
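{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the SFNR definition above concrete, the following sketch recomputes an SFNR-like value directly from the data: each brain voxel's mean over time divided by the standard deviation of its residuals after detrending with a second-order polynomial. This is an illustrative addition, not part of the original demo, and calc_noise may differ in its details (e.g. which voxels are sampled)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative SFNR-like computation (calc_noise may differ in details)\n", "ts = volume[mask > 0].T # time x brain-voxel matrix\n", "x = np.arange(ts.shape[0])\n", "\n", "# Fit a second-order polynomial trend to every voxel at once\n", "coefs = np.polyfit(x, ts, 2)\n", "trend = coefs[0] * x[:, None] ** 2 + coefs[1] * x[:, None] + coefs[2]\n", "\n", "# SFNR: mean activity over the standard deviation of the detrended residuals\n", "sfnr_voxels = ts.mean(axis=0) / (ts - trend).std(axis=0)\n", "print('Median SFNR estimate: %0.2f' % np.median(sfnr_voxels))" ] },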
" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/cellis/anaconda/envs/brainiak_test/lib/python3.6/site-packages/scipy/stats/stats.py:2279: RuntimeWarning: divide by zero encountered in true_divide\n", " np.expand_dims(sstd, axis=axis))\n", "/Users/cellis/anaconda/envs/brainiak_test/lib/python3.6/site-packages/scipy/stats/stats.py:2279: RuntimeWarning: invalid value encountered in true_divide\n", " np.expand_dims(sstd, axis=axis))\n" ] } ], "source": [ "# Calculate the noise given the parameters\n", "noise = fmrisim.generate_noise(dimensions=dim[0:3],\n", " tr_duration=int(tr),\n", " stimfunction_tr=[0] * dim[3], \n", " mask=mask,\n", " template=template,\n", " noise_dict=noise_dict,\n", " )" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "application/javascript": "/* Put everything inside the global mpl namespace */\nwindow.mpl = {};\n\n\nmpl.get_websocket_type = function() {\n if (typeof(WebSocket) !== 'undefined') {\n return WebSocket;\n } else if (typeof(MozWebSocket) !== 'undefined') {\n return MozWebSocket;\n } else {\n alert('Your browser does not have WebSocket support. ' +\n 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n 'Firefox 4 and 5 are also supported but you ' +\n 'have to enable WebSockets in about:config.');\n };\n}\n\nmpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n this.id = figure_id;\n\n this.ws = websocket;\n\n this.supports_binary = (this.ws.binaryType != undefined);\n\n if (!this.supports_binary) {\n var warnings = document.getElementById(\"mpl-warnings\");\n if (warnings) {\n warnings.style.display = 'block';\n warnings.textContent = (\n \"This browser does not support binary websocket messages. \" +\n \"Performance may be slow.\");\n }\n }\n\n this.imageObj = new Image();\n\n this.context = undefined;\n this.message = undefined;\n this.canvas = undefined;\n this.rubberband_canvas = undefined;\n this.rubberband_context = undefined;\n this.format_dropdown = undefined;\n\n this.image_mode = 'full';\n\n this.root = $('
');\n this._root_extra_style(this.root)\n this.root.attr('style', 'display: inline-block');\n\n $(parent_element).append(this.root);\n\n this._init_header(this);\n this._init_canvas(this);\n this._init_toolbar(this);\n\n var fig = this;\n\n this.waiting = false;\n\n this.ws.onopen = function () {\n fig.send_message(\"supports_binary\", {value: fig.supports_binary});\n fig.send_message(\"send_image_mode\", {});\n if (mpl.ratio != 1) {\n fig.send_message(\"set_dpi_ratio\", {'dpi_ratio': mpl.ratio});\n }\n fig.send_message(\"refresh\", {});\n }\n\n this.imageObj.onload = function() {\n if (fig.image_mode == 'full') {\n // Full images could contain transparency (where diff images\n // almost always do), so we need to clear the canvas so that\n // there is no ghosting.\n fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n }\n fig.context.drawImage(fig.imageObj, 0, 0);\n };\n\n this.imageObj.onunload = function() {\n fig.ws.close();\n }\n\n this.ws.onmessage = this._make_on_message_function(this);\n\n this.ondownload = ondownload;\n}\n\nmpl.figure.prototype._init_header = function() {\n var titlebar = $(\n '
');\n var titletext = $(\n '
');\n titlebar.append(titletext)\n this.root.append(titlebar);\n this.header = titletext[0];\n}\n\n\n\nmpl.figure.prototype._canvas_extra_style = function(canvas_div) {\n\n}\n\n\nmpl.figure.prototype._root_extra_style = function(canvas_div) {\n\n}\n\nmpl.figure.prototype._init_canvas = function() {\n var fig = this;\n\n var canvas_div = $('
');\n\n canvas_div.attr('style', 'position: relative; clear: both; outline: 0');\n\n function canvas_keyboard_event(event) {\n return fig.key_event(event, event['data']);\n }\n\n canvas_div.keydown('key_press', canvas_keyboard_event);\n canvas_div.keyup('key_release', canvas_keyboard_event);\n this.canvas_div = canvas_div\n this._canvas_extra_style(canvas_div)\n this.root.append(canvas_div);\n\n var canvas = $('');\n canvas.addClass('mpl-canvas');\n canvas.attr('style', \"left: 0; top: 0; z-index: 0; outline: 0\")\n\n this.canvas = canvas[0];\n this.context = canvas[0].getContext(\"2d\");\n\n var backingStore = this.context.backingStorePixelRatio ||\n\tthis.context.webkitBackingStorePixelRatio ||\n\tthis.context.mozBackingStorePixelRatio ||\n\tthis.context.msBackingStorePixelRatio ||\n\tthis.context.oBackingStorePixelRatio ||\n\tthis.context.backingStorePixelRatio || 1;\n\n mpl.ratio = (window.devicePixelRatio || 1) / backingStore;\n\n var rubberband = $('');\n rubberband.attr('style', \"position: absolute; left: 0; top: 0; z-index: 1;\")\n\n var pass_mouse_events = true;\n\n canvas_div.resizable({\n start: function(event, ui) {\n pass_mouse_events = false;\n },\n resize: function(event, ui) {\n fig.request_resize(ui.size.width, ui.size.height);\n },\n stop: function(event, ui) {\n pass_mouse_events = true;\n fig.request_resize(ui.size.width, ui.size.height);\n },\n });\n\n function mouse_event_fn(event) {\n if (pass_mouse_events)\n return fig.mouse_event(event, event['data']);\n }\n\n rubberband.mousedown('button_press', mouse_event_fn);\n rubberband.mouseup('button_release', mouse_event_fn);\n // Throttle sequential mouse events to 1 every 20ms.\n rubberband.mousemove('motion_notify', mouse_event_fn);\n\n rubberband.mouseenter('figure_enter', mouse_event_fn);\n rubberband.mouseleave('figure_leave', mouse_event_fn);\n\n canvas_div.on(\"wheel\", function (event) {\n event = event.originalEvent;\n event['data'] = 'scroll'\n if (event.deltaY < 0) {\n event.step = 1;\n } else {\n event.step = -1;\n }\n mouse_event_fn(event);\n });\n\n canvas_div.append(canvas);\n canvas_div.append(rubberband);\n\n this.rubberband = rubberband;\n this.rubberband_canvas = rubberband[0];\n this.rubberband_context = rubberband[0].getContext(\"2d\");\n this.rubberband_context.strokeStyle = \"#000000\";\n\n this._resize_canvas = function(width, height) {\n // Keep the size of the canvas, canvas container, and rubber band\n // canvas in synch.\n canvas_div.css('width', width)\n canvas_div.css('height', height)\n\n canvas.attr('width', width * mpl.ratio);\n canvas.attr('height', height * mpl.ratio);\n canvas.attr('style', 'width: ' + width + 'px; height: ' + height + 'px;');\n\n rubberband.attr('width', width);\n rubberband.attr('height', height);\n }\n\n // Set the figure to an initial 600x600px, this will subsequently be updated\n // upon first draw.\n this._resize_canvas(600, 600);\n\n // Disable right mouse context menu.\n $(this.rubberband_canvas).bind(\"contextmenu\",function(e){\n return false;\n });\n\n function set_focus () {\n canvas.focus();\n canvas_div.focus();\n }\n\n window.setTimeout(set_focus, 100);\n}\n\nmpl.figure.prototype._init_toolbar = function() {\n var fig = this;\n\n var nav_element = $('
');\n nav_element.attr('style', 'width: 100%');\n this.root.append(nav_element);\n\n // Define a callback function for later on.\n function toolbar_event(event) {\n return fig.toolbar_button_onclick(event['data']);\n }\n function toolbar_mouse_event(event) {\n return fig.toolbar_button_onmouseover(event['data']);\n }\n\n for(var toolbar_ind in mpl.toolbar_items) {\n var name = mpl.toolbar_items[toolbar_ind][0];\n var tooltip = mpl.toolbar_items[toolbar_ind][1];\n var image = mpl.toolbar_items[toolbar_ind][2];\n var method_name = mpl.toolbar_items[toolbar_ind][3];\n\n if (!name) {\n // put a spacer in here.\n continue;\n }\n var button = $('');\n button.click(method_name, toolbar_event);\n button.mouseover(tooltip, toolbar_mouse_event);\n nav_element.append(button);\n }\n\n // Add the status bar.\n var status_bar = $('');\n nav_element.append(status_bar);\n this.message = status_bar[0];\n\n // Add the close button to the window.\n var buttongrp = $('
');\n var button = $('');\n button.click(function (evt) { fig.handle_close(fig, {}); } );\n button.mouseover('Stop Interaction', toolbar_mouse_event);\n buttongrp.append(button);\n var titlebar = this.root.find($('.ui-dialog-titlebar'));\n titlebar.prepend(buttongrp);\n}\n\nmpl.figure.prototype._root_extra_style = function(el){\n var fig = this\n el.on(\"remove\", function(){\n\tfig.close_ws(fig, {});\n });\n}\n\nmpl.figure.prototype._canvas_extra_style = function(el){\n // this is important to make the div 'focusable\n el.attr('tabindex', 0)\n // reach out to IPython and tell the keyboard manager to turn it's self\n // off when our div gets focus\n\n // location in version 3\n if (IPython.notebook.keyboard_manager) {\n IPython.notebook.keyboard_manager.register_events(el);\n }\n else {\n // location in version 2\n IPython.keyboard_manager.register_events(el);\n }\n\n}\n\nmpl.figure.prototype._key_event_extra = function(event, name) {\n var manager = IPython.notebook.keyboard_manager;\n if (!manager)\n manager = IPython.keyboard_manager;\n\n // Check for shift+enter\n if (event.shiftKey && event.which == 13) {\n this.canvas_div.blur();\n event.shiftKey = false;\n // Send a \"J\" for go to next cell\n event.which = 74;\n event.keyCode = 74;\n manager.command_mode();\n manager.handle_keydown(event);\n }\n}\n\nmpl.figure.prototype.handle_save = function(fig, msg) {\n fig.ondownload(fig, null);\n}\n\n\nmpl.find_output_cell = function(html_output) {\n // Return the cell and output element which can be found *uniquely* in the notebook.\n // Note - this is a bit hacky, but it is done because the \"notebook_saving.Notebook\"\n // IPython event is triggered only after the cells have been serialised, which for\n // our purposes (turning an active figure into a static one), is too late.\n var cells = IPython.notebook.get_cells();\n var ncells = cells.length;\n for (var i=0; i= 3 moved mimebundle to data attribute of output\n data = data.data;\n }\n if (data['text/html'] == html_output) {\n return [cell, data, j];\n }\n }\n }\n }\n}\n\n// Register the function which deals with the matplotlib target/channel.\n// The kernel may be null if the page has been refreshed.\nif (IPython.notebook.kernel != null) {\n IPython.notebook.kernel.comm_manager.register_target('matplotlib', mpl.mpl_figure_comm);\n}\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Plot spatial noise\n", "low_spatial = fmrisim._generate_noise_spatial(dim[0:3],\n", " fwhm=4.0,\n", " )\n", "\n", "high_spatial = fmrisim._generate_noise_spatial(dim[0:3],\n", " fwhm=1.0,\n", " )\n", "plt.figure()\n", "plt.subplot(1,2,1)\n", "plt.title('FWHM = 4.0')\n", "plt.imshow(low_spatial[:, :, 12])\n", "plt.axis('off')\n", "\n", "plt.subplot(1,2,2)\n", "plt.title('FWHM = 1.0')\n", "plt.imshow(high_spatial[:, :, 12])\n", "plt.axis('off')\n", "\n", "txt=\"Slices through the volume for different amounts of spatial noise. 
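{ "cell_type": "markdown", "metadata": {}, "source": [ "To quantify what the FWHM parameter controls, the correlation between neighbouring voxels can be compared across smoothness levels. This is an illustrative addition, not part of the original demo; it reuses the same private helper as the cell above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Crude smoothness check (illustrative): neighbouring voxels should be\n", "# more strongly correlated when the simulated FWHM is larger\n", "for fwhm in [1.0, 4.0]:\n", "    vol = fmrisim._generate_noise_spatial(dim[0:3], fwhm=fwhm)\n", "    r = np.corrcoef(vol[:-1, :, :].ravel(), vol[1:, :, :].ravel())[0, 1]\n", "    print('FWHM %0.1f: neighbouring-voxel correlation r = %0.2f' % (fwhm, r))" ] },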
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.1 Create temporal noise\n", "\n", "fmrisim simulates several types of temporal noise: low-frequency drift, autoregressive noise, physiological noise and task-related noise. The cell below generates each component in isolation using the private helper functions that generate_noise calls internally." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create the different types of noise\n", "total_time = 500\n", "timepoints = list(range(0, total_time, int(tr)))\n", "\n", "drift = fmrisim._generate_noise_temporal_drift(total_time,\n", " int(tr),\n", " )\n", "\n", "mini_dim = np.array([2, 2, 2])\n", "autoreg = fmrisim._generate_noise_temporal_autoregression(timepoints,\n", " noise_dict,\n", " mini_dim,\n", " np.ones(mini_dim),\n", " )\n", " \n", "phys = fmrisim._generate_noise_temporal_phys(timepoints,\n", " )\n", "\n", "stimfunc = np.zeros((int(total_time / tr), 1))\n", "stimfunc[np.random.randint(0, int(total_time / tr), 50)] = 1\n", "task = fmrisim._generate_noise_temporal_task(stimfunc,\n", " )" ] },
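{ "cell_type": "markdown", "metadata": {}, "source": [ "The components generated above can be inspected directly. Below is a minimal plotting sketch (an illustrative addition; any figure the original notebook produced here was not preserved in this copy). The autoregressive component is omitted because it is generated as a small volume rather than a single time course." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot three of the temporal noise components generated above (illustrative)\n", "plt.figure(figsize=(8, 6))\n", "for i, (name, comp) in enumerate([('Drift', drift),\n", "                                  ('Physiological', phys),\n", "                                  ('Task', task)]):\n", "    plt.subplot(3, 1, i + 1)\n", "    plt.plot(np.asarray(comp).ravel())\n", "    plt.ylabel(name)\n", "plt.xlabel('Time point')" ] },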
"plt.ylabel('voxel ID')\n", "plt.xlabel('time')\n", "plt.title('Voxel x time')\n", "\n", "# Plot the difference\n", "ax=plt.subplot(1, 3, 3)\n", "plt.hist(system_baseline[:,0].flatten(),100)\n", "ax.yaxis.tick_right()\n", "ax.yaxis.set_label_position(\"right\")\n", "plt.title('Demeaned non-brain distribution')\n", "plt.xlabel('Activity difference')\n", "\n", "txt=\"Histogram of non-brain voxel intensity. Left shows the raw intensity histogram, middle shows the voxel by time matrix (showing that there is variance in mean but not much in time) and then right shows the demeaned voxel intensity for the first time point.\"\n", "plt.figtext(0.5, 0.01, txt, wrap=True, horizontalalignment='center', fontsize=10);\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.3 Combine noise and template\n", " \n", "The template volume is used to estimate the appropriate baseline distribution of MR values. This estimate is then combined with the temporal noise and the system noise to make an estimate of the noise. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.4 Fit the data to the noise parameters\n", "\n", "The generate_noise function does its best to estimate the appropriate noise parameters using assumptions about noise sources; however, because of the complexity of these different noise types, it is often wrong. To compensate, fitting is performed in which parameters involved in the noise generation process are changed and the noise metrics are recalculated to see whether those changes helped the fit. Due to their importance, the parameters that can be fit are SNR, SFNR and AR.\n", "\n", "The fitting of SNR/SFNR involves reweighting spatial and temporal metrics of noise. This analysis is relatively quick because this reweighting does not require that any timecourses are recreated, only that they are reweighted. At least 10 iterations are recommended because the initial guesses tend to underestimate SFNR and SNR (although the size of this error depends on the data). In the case of fitting the AR, the MA rho is adjusted until the AR is appropriate and in doing so the timecourse needs to be recreated for each iteration. In the noise_dict, one of the keys is 'matched' which is a binary value determining whether any fitting will be done\n", "\n", "In terms of timing, for a medium size dataset (64x64x27x300 voxels) it takes approximately 2 minutes to generate the data when fitting on a Mac 2014 laptop. 
" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Compute the noise parameters for the simulated noise\n", "noise_dict_sim = {'voxel_size': [dimsize[0], dimsize[1], dimsize[2]], 'matched': 1}\n", "noise_dict_sim = fmrisim.calc_noise(volume=noise,\n", " mask=mask,\n", " template=template,\n", " noise_dict=noise_dict_sim,\n", " )" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Compare noise parameters for the real and simulated noise:\n", "SNR: 23.18 vs 22.53\n", "SFNR: 70.72 vs 69.19\n", "FWHM: 5.66 vs 5.69\n", "AR: 0.86 vs 0.84\n" ] } ], "source": [ "print('Compare noise parameters for the real and simulated noise:')\n", "print('SNR: %0.2f vs %0.2f' % (noise_dict['snr'], noise_dict_sim['snr']))\n", "print('SFNR: %0.2f vs %0.2f' % (noise_dict['sfnr'], noise_dict_sim['sfnr']))\n", "print('FWHM: %0.2f vs %0.2f' % (noise_dict['fwhm'], noise_dict_sim['fwhm']))\n", "print('AR: %0.2f vs %0.2f' % (noise_dict['auto_reg_rho'][0], noise_dict_sim['auto_reg_rho'][0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **3. Generate signal**\n", "\n", "fmrisim can be used to generate signal in a number of different ways depending on the type of effect being simulated. Several tools are supplied to help with different types of signal that may be required; however, custom scripts may be necessary for unique effects. Below an experiment will be simulated in which two conditions, A and B, evoke different patterns of activity in the same set of voxels in the brain. This pattern does not manifest as a uniform change in activity across voxels but instead each condition evokes a consistent pattern across voxels. These conditions are randomly intermixed trial by trial. This code could be easily changed to instead compare univariate changes evoked by stimuli in different brain regions. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.1 Specify which voxels in the brain contain signal\n", "\n", "fmrisim provides tools to specify certain voxels in the brain that contain signal. The generate_signal function can produce regions of activity in a brain of different shapes, such as cubes, loops and spheres. Alternatively a volume could be loaded in that specifies the signal voxels (e.g. for ROIs from nilearn). The value of each voxel can be specified here, or set to be a random value." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create the region of activity where signal will appear\n", "coordinates = np.array([[21, 21, 21]]) # Where in the brain is the signal\n", "feature_size = 3 # How big, in voxels, is the size of the ROI\n", "signal_volume = fmrisim.generate_signal(dimensions=dim[0:3],\n", " feature_type=['cube'],\n", " feature_coordinates=coordinates,\n", " feature_size=[feature_size],\n", " signal_magnitude=[1],\n", " )" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "application/javascript": "/* Put everything inside the global mpl namespace */\nwindow.mpl = {};\n\n\nmpl.get_websocket_type = function() {\n if (typeof(WebSocket) !== 'undefined') {\n return WebSocket;\n } else if (typeof(MozWebSocket) !== 'undefined') {\n return MozWebSocket;\n } else {\n alert('Your browser does not have WebSocket support. ' +\n 'Please try Chrome, Safari or Firefox ≥ 6. 
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<matplotlib figure: the pattern of activity across voxels for Condition A (left) and Condition B (right)>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Plot pattern of activity for each condition\n", "plt.figure()\n", "plt.subplot(1,2,1)\n", "plt.imshow(pattern_A)\n", "plt.ylabel('Voxels')\n", "plt.tick_params(which='both', left='off', labelleft='off', bottom='off', labelbottom='off')\n", "plt.xticks([])\n", "plt.xlabel('Condition A')\n", "\n", "plt.subplot(1,2,2)\n", "plt.imshow(pattern_B)\n", "plt.tick_params(which='both', left='off', labelleft='off', bottom='off', labelbottom='off')\n", "plt.xticks([])\n", "plt.xlabel('Condition B')\n", "\n", "txt=\"Randomly generated pattern that is used for simulating the response evoked by two different events\"\n", "plt.figtext(0.5, 0.01, txt, wrap=True, horizontalalignment='center', fontsize=12);" ] },
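{ "cell_type": "markdown", "metadata": {}, "source": [ "Because the multivariate analysis later in this notebook depends on the two conditions evoking distinguishable patterns, it can be worth confirming that the random patterns are only weakly correlated. A small illustrative check (not part of the original demo):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Independent random patterns should be only weakly correlated (illustrative)\n", "r = np.corrcoef(pattern_A.ravel(), pattern_B.ravel())[0, 1]\n", "print('Correlation between condition patterns: r = %0.2f' % r)" ] },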
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.3 Generate event time course\n", "\n", "generate_stimfunction can be used to specify the time points at which task stimulus events occur. The timing of events can be specified by describing the onset and duration of each event. Alternatively, it is possible to provide a path to a three-column timing file, used by fMRI software packages like FSL, which specifies event onset, duration and weight. \n" ] },
{ "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Set up stimulus event time course parameters\n", "event_duration = 2 # How long is each event\n", "isi = 7 # What is the time between each event\n", "burn_in = 1 # How long before the first event\n", "\n", "total_time = int(dim[3] * tr) + burn_in # How long is the total event time course\n", "events = int((total_time - ((event_duration + isi) * 2)) / ((event_duration + isi) * 2)) * 2 # How many events are there?\n", "onsets_all = np.linspace(burn_in, events * (event_duration + isi), events) # Space the events out\n", "np.random.shuffle(onsets_all) # Shuffle their order\n", "onsets_A = onsets_all[:int(events / 2)] # Assign the first half of shuffled events to condition A\n", "onsets_B = onsets_all[int(events / 2):] # Assign the second half of shuffled events to condition B\n", "temporal_res = 10.0 # How many timepoints per second of the stim function are to be generated?" ] },
{ "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create a time course of events \n", "stimfunc_A = fmrisim.generate_stimfunction(onsets=onsets_A,\n", " event_durations=[event_duration],\n", " total_time=total_time,\n", " temporal_resolution=temporal_res,\n", " )\n", "\n", "stimfunc_B = fmrisim.generate_stimfunction(onsets=onsets_B,\n", " event_durations=[event_duration],\n", " total_time=total_time,\n", " temporal_resolution=temporal_res,\n", " )" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.4 Export stimulus time course for analysis\n", "\n", "If a time course of events is generated, as is the case here, it may be useful to store this in a certain format for future analyses. The `export_3_column` function can be used to export the time course as a three-column (event onset, duration and weight) timing file that is readable by FSL. Alternatively, the export_epoch_file function can be used to export numpy files that are necessary inputs for MVPA and FCMA in BrainIAK.\n" ] },
{ "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "fmrisim.export_epoch_file(stimfunction=[np.hstack((stimfunc_A, stimfunc_B))],\n", " filename='epoch_file.npy',\n", " tr_duration=tr,\n", " temporal_resolution=temporal_res,\n", " )\n", "\n", "fmrisim.export_3_column(stimfunction=stimfunc_A,\n", " filename='Condition_A.txt',\n", " temporal_resolution=temporal_res,\n", " )\n", "\n", "fmrisim.export_3_column(stimfunction=stimfunc_B,\n", " filename='Condition_B.txt',\n", " temporal_resolution=temporal_res,\n", " )" ] },
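{ "cell_type": "markdown", "metadata": {}, "source": [ "A quick way to verify the exported files (an illustrative addition, not part of the original demo). The epoch file is saved with numpy; depending on your numpy version, loading it may require allow_pickle=True." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Verify the exported timing files (illustrative)\n", "epochs = np.load('epoch_file.npy', allow_pickle=True)\n", "print('Epoch file entries:', len(epochs))\n", "\n", "onsets_check = np.loadtxt('Condition_A.txt') # Columns: onset, duration, weight\n", "print('Condition A events:', onsets_check.shape[0])" ] },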
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Multiply each pattern by each voxel time course\n", "weights_A = np.matlib.repmat(stimfunc_A, 1, voxels).transpose() * pattern_A\n", "weights_B = np.matlib.repmat(stimfunc_B, 1, voxels).transpose() * pattern_B\n", "\n", "# Sum these time courses together\n", "stimfunc_weighted = weights_A + weights_B\n", "stimfunc_weighted = stimfunc_weighted.transpose()" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "application/javascript": "/* Put everything inside the global mpl namespace */\nwindow.mpl = {};\n\n\nmpl.get_websocket_type = function() {\n if (typeof(WebSocket) !== 'undefined') {\n return WebSocket;\n } else if (typeof(MozWebSocket) !== 'undefined') {\n return MozWebSocket;\n } else {\n alert('Your browser does not have WebSocket support. ' +\n 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n 'Firefox 4 and 5 are also supported but you ' +\n 'have to enable WebSockets in about:config.');\n };\n}\n\nmpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n this.id = figure_id;\n\n this.ws = websocket;\n\n this.supports_binary = (this.ws.binaryType != undefined);\n\n if (!this.supports_binary) {\n var warnings = document.getElementById(\"mpl-warnings\");\n if (warnings) {\n warnings.style.display = 'block';\n warnings.textContent = (\n \"This browser does not support binary websocket messages. \" +\n \"Performance may be slow.\");\n }\n }\n\n this.imageObj = new Image();\n\n this.context = undefined;\n this.message = undefined;\n this.canvas = undefined;\n this.rubberband_canvas = undefined;\n this.rubberband_context = undefined;\n this.format_dropdown = undefined;\n\n this.image_mode = 'full';\n\n this.root = $('
{ "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# Prepare the data to be plotted\n", "response = signal_func[0:100, 0] * 2 # Scale the response for display\n", "downsample_A = stimfunc_A[0:int(100 * temporal_res * tr):int(temporal_res * tr), 0] # Downsample the stim function to TR resolution\n", "downsample_B = stimfunc_B[0:int(100 * temporal_res * tr):int(temporal_res * tr), 0]\n", "\n", "# Display signal\n", "plt.figure(figsize=(5, 6))\n", "plt.title('Example event time course and voxel response')\n", "Event_A = plt.plot(downsample_A, 'r', label='Event_A')\n", "Event_B = plt.plot(downsample_B, 'g', label='Event_B')\n", "Response = plt.plot(response, 'b', label='Response')\n", "plt.legend(loc=1)\n", "plt.yticks([])\n", "plt.xlabel('nth TR')\n", "\n", "txt = \"A signal voxel's response convolved with a double-gamma HRF. Event types are also shown.\"\n", "plt.figtext(0.5, 0.01, txt, wrap=True, horizontalalignment='center', fontsize=8);" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.7 Establish signal magnitude\n", "\n", "When specifying the signal we must determine how much activity change each voxel undergoes. fmrisim provides a choice between several metrics for scaling the signal. For instance, percent signal change (PSC) takes a voxel's mean activity in the noise volume and sets the maximal activation of the signal to a percentage of that mean (i.e., max(signal) = PSC/100 × mean(noise)). PSC does not take the variance of the noise into account, but other available metrics do. One metric that does, and is used below, is the contrast-to-noise ratio: the signal amplitude divided by the temporal variability of the noise. The available choices for computing the signal scale are based on Welvaert and Rosseel (2013)." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Specify the parameters for the signal\n", "signal_method = 'CNR_Amp/Noise-SD'\n", "signal_magnitude = [0.5]\n", "\n", "# Identify where in the brain the stimulus-evoked voxels are\n", "signal_idxs = np.where(signal_volume == 1)\n", "\n", "# Pull out the noise time courses for those voxels\n", "noise_func = noise[signal_idxs[0], signal_idxs[1], signal_idxs[2], :].T" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Compute the appropriately scaled signal\n", "signal_func_scaled = fmrisim.compute_signal_change(signal_func,\n", " noise_func,\n", " noise_dict,\n", " magnitude=signal_magnitude,\n", " method=signal_method,\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.8 Multiply the convolved response with the signal voxels\n", "\n", "Given a time course of simulated responses for one or more voxels and a three-dimensional volume specifying which voxels ought to respond to those events, `apply_signal` combines the two appropriately: it multiplies each signal voxel in the brain by the convolved event time course. " ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "signal = fmrisim.apply_signal(signal_func_scaled,\n", " signal_volume,\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.9 Combine signal and noise\n", "\n", "Since the brain signal is expected to be small and sparse relative to the noise, it is assumed sufficient to simply add the volume containing signal to the volume modeling noise to make the simulated brain. " ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": true }, "outputs": [], "source": [ "brain = signal + noise" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **4. Analyse data**\n", "\n", "Several tools are available for multivariate analysis in BrainIAK. These greatly speed up computation and are critical in some cases, such as a whole-brain searchlight. However, for this example we only look at data in the ROI that we know contains signal, so we do not need these advanced tools optimized for whole-brain analyses." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.1 Pull out data for each trial\n", "\n", "Identify which voxels are in the signal ROI by using the coordinates provided earlier. To identify the relevant timepoints, assume that the peak of the neural response occurs 4-6 s after each event onset. Take the TR corresponding to this peak response as the TR for that trial. In longer event/block designs you might instead average over each event." ] },
] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": true }, "outputs": [], "source": [ "hrf_lag = 4 # Assumed time from stimulus onset to HRF peak\n", "\n", "# Get the lower and upper bounds of the ROI\n", "lb = (coordinates - ((feature_size - 1) / 2)).astype('int')[0]\n", "ub = (coordinates + ((feature_size - 1) / 2) + 1).astype('int')[0]\n", "\n", "# Pull out voxels in the ROI for the specified timepoints\n", "trials_A = brain[lb[0]:ub[0], lb[1]:ub[1], lb[2]:ub[2], ((onsets_A + hrf_lag) / tr).astype('int')]\n", "trials_B = brain[lb[0]:ub[0], lb[1]:ub[1], lb[2]:ub[2], ((onsets_B + hrf_lag) / tr).astype('int')]\n", "\n", "# Reshape data for easy handling\n", "trials_A = trials_A.reshape((voxels, trials_A.shape[3]))\n", "trials_B = trials_B.reshape((voxels, trials_B.shape[3]))" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "application/javascript": "/* Put everything inside the global mpl namespace */\nwindow.mpl = {};\n\n\nmpl.get_websocket_type = function() {\n if (typeof(WebSocket) !== 'undefined') {\n return WebSocket;\n } else if (typeof(MozWebSocket) !== 'undefined') {\n return MozWebSocket;\n } else {\n alert('Your browser does not have WebSocket support. ' +\n 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n 'Firefox 4 and 5 are also supported but you ' +\n 'have to enable WebSockets in about:config.');\n };\n}\n\nmpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n this.id = figure_id;\n\n this.ws = websocket;\n\n this.supports_binary = (this.ws.binaryType != undefined);\n\n if (!this.supports_binary) {\n var warnings = document.getElementById(\"mpl-warnings\");\n if (warnings) {\n warnings.style.display = 'block';\n warnings.textContent = (\n \"This browser does not support binary websocket messages. \" +\n \"Performance may be slow.\");\n }\n }\n\n this.imageObj = new Image();\n\n this.context = undefined;\n this.message = undefined;\n this.canvas = undefined;\n this.rubberband_canvas = undefined;\n this.rubberband_context = undefined;\n this.format_dropdown = undefined;\n\n this.image_mode = 'full';\n\n this.root = $('
{ "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "# Calculate the distance matrix between trials of both types\n", "distance_matrix = sp_distance.squareform(sp_distance.pdist(np.vstack([trials_A.transpose(), trials_B.transpose()])))\n", "\n", "mds = manifold.MDS(n_components=2, dissimilarity='precomputed') # Create the mds object\n", "coords = mds.fit(distance_matrix).embedding_ # Find the mds coordinates\n", "\n", "# Plot the data\n", "plt.figure()\n", "plt.scatter(coords[:, 0], coords[:, 1], c=['red'] * trials_A.shape[1] + ['green'] * trials_B.shape[1])\n", "plt.axis('off')\n", "plt.title('Low-dimensional representation of conditions A and B')\n", "\n",
 "txt = \"Low-dimensional representation of the two conditions. Different colors denote different conditions.\"\n", "plt.figtext(0.5, 0.01, txt, wrap=True, horizontalalignment='center', fontsize=12);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.3 Test for univariate effect\n", "\n", "Run a t-test comparing the mean response across voxels between the two conditions to determine whether there is a difference." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Mean difference between condition A and B: 3.14\n", "p value: 0.229\n" ] } ], "source": [ "# Average over voxels within each trial and take the difference between conditions\n", "mean_difference = (np.mean(trials_A, 0) - np.mean(trials_B, 0))\n", "ttest = stats.ttest_1samp(mean_difference, 0)\n", "\n", "print('Mean difference between condition A and B: %0.2f\\np value: %0.3f' % (mean_difference.mean(), ttest.pvalue))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.4 Test for a multivariate effect\n", "\n", "Use a support vector machine (SVM) from scikit-learn to estimate the classification accuracy between the conditions." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Classification accuracy between condition A and B: 0.700\n" ] } ], "source": [ "# Get the inputs to the SVM\n", "input_mat = np.vstack([trials_A.transpose(), trials_B.transpose()])\n", "input_labels = trials_A.shape[1] * [1] + trials_B.shape[1] * [0]\n", "\n", "# Hold out 20% of trials for testing\n", "X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(\n", " input_mat, input_labels, test_size=0.2, random_state=0)\n", "\n", "# Fit a linear SVM and score it on the held-out trials\n", "clf = sklearn.svm.SVC(kernel='linear', C=1).fit(X_train, y_train)\n", "\n", "score = clf.score(X_test, y_test)\n", "print('Classification accuracy between condition A and B: %0.3f' % score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "Using fmrisim we were able to take in raw data and create a simulation with matched noise properties. These simulated data were then used to design an example experiment time course for a two-condition event-related design, such as a localizer task, and to run simple univariate and multivariate analyses on the result.\n", "\n", "fmrisim can be used to optimize experimental design parameters (e.g., what is the best ISI given the time constraints of the experiment?) and to pre-register an experiment's preprocessing pipeline. \n", "\n", "More resources and examples of fmrisim usage can be found in the `brainiak/utils` folder; please also check out the references below for published examples of fmrisim." ] },
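{ "cell_type": "markdown", "metadata": {}, "source": [ "As an optional last step (an addition to this demo, sketched with standard nibabel calls), the simulated brain can be written out as a NIfTI file so that it can be run through the same preprocessing and analysis pipeline planned for the real data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Sketch: save the simulated data, reusing the affine of the input volume\n", "brain_nii = nibabel.Nifti1Image(brain.astype('float32'), affine=nii.affine)\n", "nibabel.save(brain_nii, 'simulated_brain.nii.gz')" ] },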
(2006) Report on a multicenter fMRI quality assurance protocol. Journal of Magnetic Resonance Imaging 23, 827-839\n", "\n", "Friston, K.J., et al. (1998) Event-related fMRI: characterizing differential responses. Neuroimage 7, 30-40\n", "\n", "Gudbjartsson, H. and Patz, S. (1995) The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine 34, 910-914\n", "\n", "Welvaert, M., et al. (2011) neuRosim: An R package for generating fMRI data. Journal of Statistical Software 44, 1-18\n", "\n", "Welvaert, M. and Rosseel, Y. (2013) On the definition of signal-to-noise ratio and contrast-to-noise ratio for fMRI data. PLoS ONE 8(11), e77089\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }