Commit ec3c515: Migrating wiki contents from Google Code
GoogleCodeExporter committed Sep 28, 2015 (0 parents)

Showing 61 changed files with 6,231 additions and 0 deletions.
9 changes: 9 additions & 0 deletions AssociatedPublications.md
@@ -0,0 +1,9 @@
# Associated Publications #

Several papers that use or are associated with the MVPA toolbox have been published. They are listed below.


* [Wolbers, T., Zahorik, P., & Giudice, N. A. (2010). Decoding the direction of auditory motion in blind humans. NeuroImage, In Press, Corrected Proof, available online 5 May 2010. ISSN 1053-8119, DOI: 10.1016/j.neuroimage.2010.04.266.](http://www.sciencedirect.com/science/article/B6WNP-50106VT-2/2/53b8d0c71836af485b1e58c4c6915035)
* [Rissman, J., Greely, H. T., & Wagner, A. D. (2010). Detecting individual memories through the neural decoding of memory states and past experience. Proceedings of the National Academy of Sciences, USA, 107, 9849-9854.](http://www.pnas.org/content/107/21/9849)
* [Coutanche, M. N., Thompson-Schill, S. L., & Schultz, R. T. (2011). Multi-voxel pattern analysis of fMRI data predicts clinical symptom severity. NeuroImage, 57(1), 113-123.](http://dx.doi.org/10.1016/j.neuroimage.2011.04.016)
5 changes: 5 additions & 0 deletions ContactDetails.md
@@ -0,0 +1,5 @@
### Contact details ###

See http://code.google.com/p/princeton-mvpa-toolbox/wiki/Downloads to obtain the latest version of the toolbox.

Please contact [email protected] for more information, suggestions, or bug reports; any feedback is appreciated. We'd be especially happy to hear from you if you're interested in contributing to the toolbox in any way. This mailing list is currently used to announce new releases of the package, discuss development details, and answer questions from users.
40 changes: 40 additions & 0 deletions Downloads.md
@@ -0,0 +1,40 @@
# Downloads #

## Get the Toolbox ##
There are a number of ways to acquire the MVPA project code, depending on how you prefer to use it.

You can retrieve the [latest release](http://princeton-mvpa-toolbox.googlecode.com/files/mvpa1.1.tar.gz) of the toolbox.

Alternatively, you can check out the [most up-to-date SVN development version](http://code.google.com/p/princeton-mvpa-toolbox/source/checkout).

### Archive of old download versions ###

Note: We highly recommend upgrading to the latest version, as each upgrade is mostly backwards-compatible.

* [0.7.1](http://princeton-mvpa-toolbox.googlecode.com/files/mvpa.0.7.1.tar.gz)
* [0.7](http://princeton-mvpa-toolbox.googlecode.com/files/mvpa.0.7.tar.gz)
* [0.6.1](http://princeton-mvpa-toolbox.googlecode.com/files/mvpa.0.6.1.tar.gz)
* [0.6](http://princeton-mvpa-toolbox.googlecode.com/files/mvpa.0.6.tar.gz)

## Get the Data-sets ##

* the Haxby et al. (2001) [sample data, AFNI format](https://webapps.pni.princeton.edu/downloads/mvpa/afni_set.tar.gz) (340MB)
* [Analyze format](https://webapps.pni.princeton.edu/downloads/mvpa/analyze_set.tar.gz) (340MB)
* [NIfTI format](https://webapps.pni.princeton.edu/downloads/mvpa/nifti_set.tar.gz) (340MB)
* md5 checksums ([right-click to download](https://webapps.pni.princeton.edu/downloads/mvpa/md5mvpa.txt), 0.1 KB). Instructions for use can be found [here](https://compmem.princeton.edu/mvpa_docs/md5checksum).

Having downloaded them, follow [these instructions](Setup#Installation.md), and then you should be ready to start the [tutorial](TutorialIntro.md). [Contact us](ContactDetails.md) if you have trouble with the installation.

## Papers and Abstracts ##

[The Multi-Voxel Pattern Analysis (MVPA) toolbox](http://princeton-mvpa-toolbox.googlecode.com/files/detre_et_al_MVPA_OHBM2006.PDF) - [abstract](http://princeton-mvpa-toolbox.googlecode.com/files/PosterOhbm06.html) (2006)
(poster #50, presented on Wed June 15th in the afternoon)
Greg J Detre, Sean M Polyn, Christopher D Moore, Vaidehi S Natu, Benjamin D Singer, Jonathan D Cohen, James V Haxby, Kenneth A Norman

[A Matlab-based toolbox to facilitate multi-voxel pattern classification of fMRI data](http://princeton-mvpa-toolbox.googlecode.com/files/polyn_et_al_mvpa_ohbm2005.pdf) (2005)
Sean M Polyn, Greg J Detre, Sylvain Takerkart, Vaidehi S Natu, Michael S Benharrosh, Benjamin D Singer, Jonathan D Cohen, James V Haxby, Kenneth A Norman

## Important disclaimer ##
At present, the Princeton MVPA toolbox is unsupported beta software, which we are making available to anyone who might find it useful. Because the software is still in beta form, there may be bugs and other teething problems. We do not take any responsibility for any problems that you might have related to use of the software. If you find a bug or have any further suggestions, please let us know, but (given the presently unsupported status of the software) we may not be able to reply to every query we receive.

[A list of all local downloads](http://code.google.com/p/princeton-mvpa-toolbox/downloads/list).
32 changes: 32 additions & 0 deletions EBCMain.md
@@ -0,0 +1,32 @@
# The Princeton EBC team #

Princeton submitted multiple entries to the [2006 Pittsburgh EBC fMRI analysis competition](http://pbc.lrdc.pitt.edu/). The competition's aim was to "challenge multiple groups to use state-of-the-art techniques to infer subjective experience from a rigorously collected set of fMRI data".

Our entries were the result of a collaboration among students, postdocs and faculty from multiple departments at Princeton, including Psychology, Physics, Computer Science, Electrical Engineering, and Applied Mathematics. Over the course of the competition, members of the EBC team met regularly to discuss methods and share code. The team comprised the following researchers (in alphabetical order): David Blei, Eugene Brevdo, Ronald Bryan, Melissa Carroll, Denis Chigirev, Greg Detre, Andrew Engell, Shannon Hughes, Christopher Moore, Ehren Newman, Ken Norman, Vaidehi Natu, Susan Robison, Greg Stephens, Matt Weber, and David Weiss. The team was coordinated by Greg Detre (a Psychology graduate student) and supervised by Ken Norman (a Psychology faculty member).

[![The Princeton EBC team](http://princeton-mvpa-toolbox.googlecode.com/files/Princeton_EBC_small.jpg)](http://princeton-mvpa-toolbox.googlecode.com/files/Princeton_EBC.jpg)


Back row from left to right: Ken Norman, Denis Chigirev, Matt Weber, Shannon Hughes, Eugene Brevdo, Melissa Carroll.
Front row: Chris Moore, Greg Detre, Greg Stephens.
Absent: David Blei, Ronald Bryan, Andrew Engell, Ehren Newman, Vaidehi Natu, Susan Robison.

## Biographies ##

### Denis Chigirev ###
Denis is a physics graduate student at Princeton University. After graduating from Physical Technical High School #566 in St. Petersburg, Russia, he won a top prize at the International Physics Olympiad and was awarded a scholarship to study at the University of Chicago. He then spent a year at Trinity College, Cambridge University, before joining Princeton as a graduate student, where he became interested in problems at the intersection of physics, biology, and computer science. He is currently finishing his dissertation.

### Greg Stephens ###
Greg received his Ph.D. in Physics (Cosmology) from the University of Maryland. He currently holds a joint postdoctoral position between the Physics Department and the Center for Brain, Mind and Behavior at Princeton University. Denis and Greg are also active members of the Biophysics Theory group headed by Bill Bialek.

## OHBM information ##

Melissa Carroll, Denis Chigirev, Greg Detre, Andy Engell, Ehren Newman and Greg Stephens will be attending the 2006 [Human Brain Mapping Conference](http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=1) in Florence, where they will be presenting:

[The Multi-Voxel Pattern Analysis (MVPA) toolbox - (50 W-PM)](http://princeton-mvpa-toolbox.googlecode.com/files/PosterOhbm06.html)

## The MVPA toolbox ##

Many members of the Princeton EBC team are also involved with the [MVPA toolbox](http://code.google.com/p/princeton-mvpa-toolbox) effort, which aims to create a Matlab-based toolbox to facilitate pattern classification and multivariate prediction analyses on fMRI neuroimaging data, supported by the [Princeton Neuroscience Institute](http://www.princeton.edu/neuroscience/) (formerly the Center for the Study of Brain, Mind and Behavior).

We have released a set of MVPA toolbox scripts tailored specifically to the EBC competition. See the [EBC Extension](ExpansionEBC.md) page for more information.
22 changes: 22 additions & 0 deletions ExpansionEBC.md
@@ -0,0 +1,22 @@
# EBC Competition MVPA Toolbox Expansion #

As of version 0.8 of the toolbox, the EBC Expansion version 0.3 has been incorporated into the core MVPA files. This means that you no longer need to download and extract a separate archive for the EBC files.

## Installation ##

Please follow the installation instructions in [Setup](Setup.md) to install the MVPA 1.0 Toolbox.

## Dataset ##

We have now posted the subset of the data used in the [EBC Tutorial](TutorialEBC.md) online. This dataset contains the complete EBC data for movies 1 and 2 and subjects 1, 2, and 3. It also contains a datafile with the optimal parameters used in the advanced tutorial (`ebc_tutorial_adv.m`). **You can download the data here:**

http://www.csbmb.princeton.edu/mvpa/downloads/ebc/ebc_movies1-2_subj1-3_params.tar.gz

## Tutorial ##

Once you've installed the toolbox and downloaded the dataset, please follow the [EBC Tutorial](TutorialEBC.md).


---

[Main Page](Main.md)
140 changes: 140 additions & 0 deletions Glossary.md
@@ -0,0 +1,140 @@
# Princeton Multi-Voxel Pattern Analysis glossary #

See [Manual](Manual.md) (Data structures section) for more information on terms relating to the way the data is stored by the toolbox.





## block ##
A group of contiguous [TRs](#TR.md) from the same [condition](#condition.md) in a particular [run](#run.md). Usually comprises multiple behavioral [trials](#trial.md).

## classification ##
In the machine learning sense, classification means taking a labelled training data set and showing the classifier algorithm examples of each condition over and over until it can successfully identify the training data. Then, the classifier's generalization performance is tested by asking it to guess the conditions of new, unseen data points.

See: [Classification](ManualClassification.md) in the manual.


## condition ##
The groups that you're trying to teach your classifier to distinguish, e.g. different tasks being performed by the subject in the experiment, or different stimuli being viewed.

## cross-validation ##
When you use n-minus-one/leave-one-out cross-validation classification, you iterate over your data multiple times. Each [iteration](#iteration.md) involves a fresh classifier [trained](#training.md) on a subset of the data, and tested on the withheld data.

See: [N-minus-one (leave-one-out) cross-validation](ManualClassification#N-minus-one_(leave-one-out)_cross-validation.md)
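
As a rough illustration of the bookkeeping involved, here is a minimal sketch in plain MATLAB (not the toolbox's own API); the run labels, condition labels, and data are made up, and a placeholder stands in for a real classifier:

```matlab
% Hypothetical sketch of leave-one-run-out bookkeeping in plain MATLAB.
% The run labels, condition labels and data are made up for illustration.
runs   = [1 1 1 2 2 2 3 3 3];      % which run each TR belongs to
labels = [1 2 1 2 1 2 1 2 1];      % condition label for each TR
data   = rand(50, numel(runs));    % fake pattern: 50 voxels x 9 TRs

accuracy = zeros(1, max(runs));
for r = 1:max(runs)                % one iteration per withheld run
    test_idx  = (runs == r);       % this run is the testing data
    train_idx = ~test_idx;         % everything else is the training data

    % ... train a classifier on data(:, train_idx) and labels(train_idx),
    % then guess the conditions of the withheld TRs. A real classifier's
    % guesses would go here; we use the true labels as a placeholder.
    guesses = labels(test_idx);

    accuracy(r) = mean(guesses == labels(test_idx));
end
fprintf('Mean cross-validation accuracy: %.2f\n', mean(accuracy));
```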

## feature selection ##
Deciding which of your features (e.g. voxels) you want to include in your analysis.

## generalization ##
Testing the performance of a trained classifier on previously-unseen (test) data.

## header ##
See: [Data structure - Book-keeping and the headers](ManualDataStructures#Book-keeping_and_the_headers.md)

## history ##
A free-text field in the [header](#header.md) that gets automatically appended to, creating a sort of narrative of that object's role in the analysis.

See [Data structure - Book-keeping and the headers](ManualDataStructures#Book-keeping_and_the_headers.md)

## iteration ##
Running the classifier once, using a particular subset of the data for testing and the remainder for training. For example, if you have 10 runs, you'll have 10 iterations, each withholding a different run as the testing data.

See: [n minus one cross validation](#n_minus_one_cross_validation.md)

## leave-one-out ##
We use 'leave-one-out' and 'n-minus-one' interchangeably to refer to the [cross-validation](#cross-validation.md) procedure that leaves out a different subsection (e.g. [run](#run.md)) of the data each [iteration](#iteration.md).

## mask ##
A boolean 3D (or maybe 2D) single-TR volume indicating which voxels are to be included.

See [Data structure - masks](ManualDataStructures#Masks.md).

## name ##
Every [object](#object.md) in the [\_subj\_](#subj.md) structure has a name. This is a very important field, since it is used whenever accessing that object. The user is advised to refrain from accessing objects directly (e.g. subj.patterns{1}).

See: [Data structure - innards of the \_subj\_ structure](#The_innards_of.md) and [Advanced - accessing \_subj\_ directly](#Accessing_the_subj.md)

## n minus one cross validation ##
We use 'leave-one-out' and 'n-minus-one' interchangeably to refer to the [cross-validation](#cross-validation.md) procedure that leaves out a different subsection (e.g. [run](#run.md)) of the data each [iteration](#iteration.md).

## object ##
An example of one of the [4 main data types](#Data_structure.md), e.g. a single cell in _subj_._patterns_ or _subj.masks_. Contains a _mat_ field with all the data, as well as other required fields such as [name](#name.md), group\_name, derived\_from, [header](#header.md) etc.

See: [The innards of the subj structure](#The_innards_of.md)

## one-of-n ##
In this toolbox, this usually refers to a regressors matrix in which only a single condition can be active at any timepoint. This makes sense for basic/standard classification - each timepoint belongs to one or other of the conditions, but not more than one at once.

Convolving regressors with a hemodynamic response function will lead to continuous-valued regressors, which may overlap (i.e. more than one condition may be non-zero at a given timepoint) and so may violate some functions' one-of-n requirements.

[Check\_1ofn\_regressors.m](http://code.google.com/p/princeton-mvpa-toolbox/source/browse/trunk/core/util/check_1ofn_regressors.m) allows you to test whether a matrix is one-of-n.
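
For illustration, here is a small sketch (plain MATLAB, made-up labels) of building a one-of-n regressors matrix and the kind of check that `check_1ofn_regressors.m` performs; this is not that function's implementation, just the underlying idea:

```matlab
% Hypothetical sketch: a one-of-n regressors matrix for 3 conditions x 8 TRs.
% Each column (timepoint) contains exactly one 1, so the matrix is one-of-n.
cond_per_tr = [1 1 2 2 3 3 1 2];              % active condition at each TR
nConds = 3;  nTRs = numel(cond_per_tr);
regs = zeros(nConds, nTRs);
regs(sub2ind(size(regs), cond_per_tr, 1:nTRs)) = 1;

% The check boils down to: only 0/1 values, and exactly one active
% condition per timepoint.
is_one_of_n = all(ismember(regs(:), [0 1])) && all(sum(regs, 1) == 1);
```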

## pattern ##
A (features x timepoints) matrix, usually of voxel activities, but could also be PCA components, wavelet coefficients, GLM beta weights or a statmap.

See: [Data structure - patterns](ManualDataStructures#Patterns.md).

## peeking ##
When you use your testing data set to help with voxel selection. Basically, this is a kind of cheating, and spuriously/illegitimately improves your classification by some margin.

See: [Manual](Manual.md).

## performance ##
The performance metric measures the similarity between the output produced by a classifier and the output it's supposed to produce.

See [Performance](ManualClassification#Performance.md) in the Classification section of the manual.
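
For intuition, here is a minimal sketch (plain MATLAB, with made-up numbers) of a proportion-correct metric: the classifier's most active output unit is compared against the desired condition at each test timepoint.

```matlab
% Hypothetical sketch of a proportion-correct performance metric.
% acts: classifier outputs, desired: the teacher signal (both made up;
% rows are conditions, columns are test timepoints).
acts    = [0.9 0.1 0.3;
           0.1 0.9 0.7];
desired = [1   0   1;
           0   1   0];
[~, guesses] = max(acts, [], 1);      % most active condition per timepoint
[~, answers] = max(desired, [], 1);   % correct condition per timepoint
perf = mean(guesses == answers);      % proportion correct (here 2/3)
```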

## Pre-Classification ##
By this, we mean the normalization and feature selection steps that take place after the data structure has been created but before classification begins, e.g. [zscore\_runs.m](http://code.google.com/p/princeton-mvpa-toolbox/source/browse/trunk/core/preproc/zscore_runs.m) and [feature\_select.m](http://code.google.com/p/princeton-mvpa-toolbox/source/browse/trunk/core/preproc/feature_select.m).

See [pre-classification](ManualPreClassification.md).
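
As an illustration of the normalization step, here is a sketch of what z-scoring each run amounts to on a raw data matrix; the actual `zscore_runs.m` operates on the _subj_ structure, so treat this only as the underlying arithmetic (data and run labels are made up):

```matlab
% Hypothetical sketch of run-wise z-scoring on a raw [voxels x TRs] matrix:
% each voxel's timecourse is set to mean 0 and standard deviation 1 within
% each run (data and run labels are made up for illustration).
data = rand(50, 9);                  % fake pattern: 50 voxels x 9 TRs
runs = [1 1 1 2 2 2 3 3 3];          % which run each TR belongs to
for r = unique(runs)
    idx = (runs == r);
    mu  = mean(data(:, idx), 2);
    sd  = std(data(:, idx), 0, 2);
    % implicit expansion (R2016b+); use bsxfun/repmat on older MATLAB
    data(:, idx) = (data(:, idx) - mu) ./ sd;
end
```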

## regressors ##
For our purposes, the term 'regressors' refers to a set of values for each TR that denote the extent to which each condition is active. Used by statistical tests, and also as the teacher signal for the classifiers.

See: [Data structure ' regressors](ManualDataStructures#Regressors.md).

## results ##
This is where all the information about [classification](#classification.md) is stored.

See: [Classification ' results structure](ManualClassification#The_results_structure.md).

## run ##
A single continuous scanning period. There are usually a handful of runs in a given hour-long experiment.

## selector ##
A set of labels for each [TR](#TR.md), e.g. where all the [runs](#run.md) start and finish, or which TRs should be used for [training](#training.md) and which for testing on this [iteration](#iteration.md).

See: [Data structure - selectors](ManualDataStructures#Selectors.md).
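
For example, a cross-validation selector might mark each TR as training or testing. The sketch below (plain MATLAB, made-up values) assumes the common convention of 1 = train, 2 = test, 0 = ignore; check the toolbox manual for the exact convention it uses.

```matlab
% Hypothetical sketch: two selectors, one value per TR (made-up values).
runs_sel = [1 1 1 2 2 2 3 3 3];      % which run each TR belongs to
% A cross-validation selector for the iteration that withholds run 2,
% assuming the convention 1 = train, 2 = test, 0 = ignore.
xval_sel = [1 1 1 2 2 2 1 1 1];
train_idx = (xval_sel == 1);         % TRs used to train the classifier
test_idx  = (xval_sel == 2);         % TRs used to test it
```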

## statmap ##
The result of some kind of statistical test, usually performed separately for each voxel. For instance, the ANOVA yields a statmap of p values, one for each voxel, each reflecting how reliably that voxel's activity differs between conditions.

Statmaps are stored as patterns, since the term 'mask' is usually used to refer to a boolean 3D volume.

A mask can be created from a statmap by choosing all the voxels that are above/below some threshold.

See [Data structure - masks](ManualDataStructures#Masks.md) and [Pre-classification - Statmaps](ManualPreClassification#Statmaps.md).
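
A minimal sketch of the thresholding idea (plain MATLAB, fake p values; volume dimensions are arbitrary):

```matlab
% Hypothetical sketch: thresholding a statmap of p values to make a mask.
pvals  = rand(64, 64, 34);           % fake statmap: one p value per voxel
thresh = 0.05;
mask   = pvals < thresh;             % boolean 3D volume of included voxels
fprintf('%d of %d voxels survive the threshold\n', nnz(mask), numel(mask));
```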

## subj ##
See: [Data structures](ManualDataStructures.md) and [The innards of the subj structure](#The_innards_of.md).

## testing ##
Presenting a [trained](#training.md) classifier with patterns that it has never seen before, and testing its performance.

## TR ##
Stands for 'repetition time'. Basically, the time taken for the scanner to acquire a single 3D brain volume. We often use it (somewhat imprecisely) to mean a single timepoint (usually of about 2s).

## training ##
Showing a classifier lots of examples of a person's brain in condition A, and telling it each time, 'This is an example of the brain in condition A'. We then show it lots of examples of the same brain in condition B, also telling it which condition these brain examples came from. This process repeats until the classifier has learned which are which.

In reality, the examples tend to be interleaved with each other and presented in a different order each time. Most classifier algorithms can also deal with more than just two categories.

## trial ##
A behavioural trial in the experiment, which typically spans multiple [TRs](#TR.md). Multiple trials make up a [block](#block.md).

## voxel selection ##
Whenever you apply a _mask_ to a _pattern_, you are selecting voxels. This term tends to be used more often in the machine learning context of 'feature selection' - choosing which of the features (voxels) contain signal for the classification problem you are attempting.

See: 'Pre-classification - Anova' in the [Manual](Manual.md).
21 changes: 21 additions & 0 deletions Howtos.md
@@ -0,0 +1,21 @@
# Howtos and occasionally-asked questions #



This contains a set of scenarios that you might run into, and ways to achieve simple goals that might come up in your analysis. In almost all instances, there will be other ways of doing things. These suggested methods are designed to utilize the toolbox functionality to save you work. [Let us know](ContactDetails.md) if you think you have a better solution than the one provided.

## [Patterns](HowtosPattern.md) ##

## [Regressors](HowtosRegressors.md) ##

## [Masks](HowtosMasks.md) ##

## [Pre-Classification](HowtosPreClassification.md) ##

## [Classification](HowtosClassification.md) ##

## [Exporting](HowtosExporting.md) ##

## [Results](HowtosResults.md) ##

## [Miscellaneous](HowtosMisc.md) ##