
Commit aa83507

committed Jul 25, 2024
scheduled plugin update
1 parent 4da7ec0 commit aa83507

File tree

9 files changed

+1048
-18
lines changed


‎plugins/NFT/index.md

Lines changed: 89 additions & 0 deletions
@@ -8,6 +8,77 @@ nav_order: 3
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/NFT).

# MATLAB Toolbox and EEGLAB Plugin for Neuroelectromagnetic Forward Head Modeling

![Screenshot 2024-07-25 at 13 28 55](https://github.com/user-attachments/assets/8871c122-dba0-4e1d-a976-7a4a8f6f7c6b)

# What is NFT?

The Neuroelectromagnetic Forward Modeling Toolbox (NFT) is a MATLAB toolbox for generating realistic head models from available data (MRI and/or electrode locations) and for computing numerical solutions to the forward problem of electromagnetic source imaging (Zeynep Akalin Acar & S. Makeig, 2010). NFT includes tools for segmenting scalp, skull, cerebrospinal fluid (CSF), and brain tissues from T1-weighted magnetic resonance (MR) images. The Boundary Element Method (BEM) is used for the numerical solution of the forward problem. After the segmented tissue volumes have been extracted, surface BEM meshes may be generated. When a subject MR image is not available, a template head model may be warped to 3-D measured electrode locations to obtain an individualized BEM head model. Toolbox functions can be called either from a graphical user interface (GUI) compatible with EEGLAB (sccn.ucsd.edu/eeglab) or from the MATLAB command line. Function help messages and a user tutorial are included. The toolbox is freely available for noncommercial use and open source development under the GNU General Public License.

# Why NFT?

NFT is released under an open source license, allowing researchers to contribute to and improve the work for the benefit of the neuroscience community. By bringing together advanced head modeling and forward problem solution methods and implementations in an easy-to-use toolbox, NFT complements EEGLAB, an open source toolkit under active development. Combined, NFT and EEGLAB form a freely available EEG (and, in future, MEG) source imaging solution.

The toolbox implements the major aspects of realistic head modeling and forward problem solution from available subject information:

1. Segmentation of T1-weighted MR images: The preferred way to generate a realistic head model is to use a 3-D whole-head structural MR image of the subject's head. The toolbox can generate a segmentation of scalp, skull, CSF, and brain tissues from a T1-weighted image.

2. High-quality BEM meshes: The accuracy of the BEM solution depends on the quality of the underlying mesh that models tissue conductance-change boundaries. To avoid numerical instabilities, the mesh must be topologically correct with no self-intersections. It should represent the surface using high-quality elements while keeping the number of elements as small as possible. NFT can create high-quality linear surface BEM meshes from the head segmentation.

3. Warping a template head model: When a whole-head structural MR image of the subject is not available, a semi-realistic head model can be generated by warping a standard template BEM mesh to the digitized electrode coordinates (instead of vice versa).

4. Registration of electrode positions with the BEM mesh: The digitized electrode locations and the BEM mesh must be aligned to compute accurate forward problem solutions and lead field matrices.

5. Accurate, high-performance forward problem solution: NFT uses a high-performance BEM implementation from the open source METU-FP Toolkit for bioelectromagnetic field computations.
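Run from the MATLAB command line, the five steps above might be sketched as follows. Note that the function names here are illustrative placeholders, not NFT's actual API (only readlocs() is a real EEGLAB function); consult the NFT User Manual for the real entry points:

```matlab
% Hypothetical pipeline sketch -- function names below are placeholders,
% NOT NFT's actual API; see the NFT User Manual for the real functions.
seg    = segment_mr_image('subject_T1.nii');   % step 1: segment scalp/skull/CSF/brain
meshes = generate_bem_meshes(seg);             % step 2: high-quality linear BEM meshes
elocs  = readlocs('subject.elc');              % digitized electrode positions (EEGLAB)
model  = register_electrodes(meshes, elocs);   % step 4: align electrodes with the mesh
LFM    = solve_forward_problem(model);         % step 5: BEM lead field matrix (METU-FP)
```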
# Required Resources

MATLAB 7.0 or later running under any operating system (Linux, Windows). A large amount of RAM is useful: at least 2 GB (4-8 GB recommended for forward problem solution of realistic head models). The MATLAB Image Processing Toolbox is also recommended.
Pre-compiled binaries for the following third-party programs are distributed within the NFT toolbox for the convenience of users. The binaries are compiled for 32- and 64-bit Linux distributions.
@@ -31,3 +102,21 @@ MATITK: Matlab and ITK
homepage: http://www.sfu.ca/~vwchu/matitk.html

Note: The MATITK shared libraries are installed in the 'mfiles' directory.

# Download

To download the NFT, go to the [NFT download page](http://sccn.ucsd.edu/nft/).

# NFT User's Manual

See the tutorial section for more information. [Click here to download the NFT User Manual as a PDF book](https://github.com/user-attachments/files/16383465/NFT_Tutorial.pdf)

Creation and documentation by: Zeynep Akalin Acar, Project Scientist, zeynep@sccn.ucsd.edu

# NFT Reference Paper

Zeynep Akalin Acar & Scott Makeig, [Neuroelectromagnetic Forward Head Modeling Toolbox](http://sccn.ucsd.edu/%7Escott/pdf/Zeynep_NFT_Toolbox10.pdf). <em>Journal of Neuroscience Methods</em>, 2010.

‎plugins/NIMA/index.md

Lines changed: 6 additions & 4 deletions
@@ -8,6 +8,11 @@ nav_order: 26
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/NIMA).

![P159_separatealpha.png](images/P159_separatealpha.png)

The NIMA EEGLAB plugin
-------------------------------------------------------------

NIMA stands for Nima's Images from Measure-projection Analysis. Measure Projection Toolbox (MPT) is a published method (Bigdely-Shamlo et al., 2013); for its wiki page see [this
@@ -25,11 +30,8 @@ What you can do with the optional inputs (12/07/2018 updated)
- Specifying which MRI image and blob/voxel-cluster projections to show.

GUI, Blobs, and Voxels
----------------------
The GUI image can be seen in the screenshot below. This visualization works on 3-D Gaussian-blurred dipole locations, called (probabilistic) *dipole density*, which requires two parameters to determine the spatial
@@ -91,4 +93,4 @@ FWHM = 8 mm, number of sigma to truncate Gaussian = 3. Top row, voxel
plot. Bottom row, blob plot. From left to right, Alpha = 0.1, 0.3, 0.5, 0.7, 0.9.

![Alphacomparison.png](images/Alphacomparison.png)

‎plugins/PACTools/index.md

Lines changed: 0 additions & 5 deletions
@@ -8,11 +8,6 @@ nav_order: 17
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/PACTools).

# EEGLAB Event Related PACTools
The Event Related PACTools (PACTools) is an EEGLAB plug-in to compute phase-amplitude coupling in single-subject data. In addition to traditional methods for computing PAC, the plugin includes the Instantaneous and Event-Related implementations of the Mutual Information Phase-Amplitude Coupling Method (MIPAC) (see Martinez-Cancino et al., 2019).
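A hedged command-line sketch, assuming the plug-in's pop_pac() entry point; the argument pattern shown here is an assumption, so check `help pop_pac` in your installed version:

```matlab
% Sketch only: compute PAC between a 4-8 Hz phase band and a 30-60 Hz
% amplitude band on channel 1 of an epoched dataset. The argument
% order and names are assumptions -- verify with 'help pop_pac'.
EEG = pop_loadset('filename', 'mydata_epoched.set');
EEG = pop_pac(EEG, 'Channels', [4 8], [30 60], 1, 1);
```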

‎plugins/clean_rawdata/Documentation.md

Lines changed: 408 additions & 0 deletions
Large diffs are not rendered by default.

‎plugins/cleanline/index.md

Lines changed: 284 additions & 0 deletions
Large diffs are not rendered by default.

‎plugins/firfilt/index.md

Lines changed: 15 additions & 5 deletions
@@ -8,17 +8,27 @@ nav_order: 27
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/firfilt).

FIRFILT EEGLAB plugin
-------------
The FIRfilt EEGLAB plugin is a tool used within the EEGLAB environment, specifically designed for filtering EEG data using Finite Impulse Response (FIR) filters. Key features and functionalities of the FIRfilt plugin:

* FIR filtering: FIRfilt provides a straightforward interface for applying FIR filters to EEG data. FIR filters are commonly used due to their stability and linear-phase properties.
* Filter types: Users can create various types of FIR filters, including low-pass, high-pass, band-pass, and band-stop filters. This flexibility allows users to isolate specific frequency bands of interest.
* Design methods: FIRfilt offers several methods for designing FIR filters, such as the window method, the least-squares method, and the equiripple method. Each method has its own advantages depending on the specific filtering requirements.
* Graphical interface: The plugin integrates with the EEGLAB GUI, making it accessible for users who prefer graphical user interfaces for their data processing tasks.
* Command-line support: For more advanced users, FIRfilt also supports command-line operations, allowing for script-based automation and integration into larger data processing pipelines.

See [this page](https://eeglab.org/others/Firfilt_FAQ.html) or the [paper](https://home.uni-leipzig.de/biocog/eprints/widmann_a2015jneuroscimeth250_34.pdf) for additional documentation.
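As a minimal command-line sketch (the filename and cutoff frequencies below are illustrative, not taken from this page; by default pop_eegfiltnew() designs a Hamming-windowed sinc FIR filter):

```matlab
% Band-pass filter an EEGLAB dataset from the command line with the
% firfilt plugin. The filename and the 1-40 Hz band are illustrative.
EEG = pop_loadset('filename', 'mydata.set');   % load an existing dataset
EEG = pop_eegfiltnew(EEG, 1, 40);              % low cutoff 1 Hz, high cutoff 40 Hz
EEG = eeg_checkset(EEG);                       % sanity-check the modified dataset
```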
Reference
-------------
Please cite

> Widmann, A., Schröger, E., & Maess, B. (2015). Digital filter design for electrophysiological data - a practical approach. Journal of Neuroscience Methods, 250, 34-46.

if you have used functions from the EEGLAB firfilt plugin in your manuscript.

Version History
---------------
v2.8 - Added usefftfilt option to pop_eegfiltnew()

‎plugins/get_chanlocs/Documentation.md

Lines changed: 243 additions & 0 deletions
@@ -0,0 +1,243 @@
---
layout: default
title: Documentation
long_title: Documentation
parent: get_chanlocs
grand_parent: Plugins
---
<h3>

<b>*get_chanlocs*: Compute 3-D electrode positions from a 3-D head image
==\> <u>[Download the *get_chanlocs* User Guide](https://sccn.ucsd.edu/eeglab/download/Get_chanlocs_userguide.pdf)</u></b>

</h3>

![](Get_chanlocs.jpg)

### What is *get_chanlocs*?
The *get_chanlocs* EEGLAB plug-in is built on functions in [FieldTrip](http://www.fieldtriptoolbox.org/) to locate 3-D electrode positions from a 3-D scanned head image. Robert Oostenveld, originator of the FieldTrip toolbox, alerted us in 2017 that he and his students in Nijmegen had put functions into FieldTrip to compute positions of scalp electrodes from recorded 3-D images for one 3-D camera, the [Structure scanner](https://structure.io/) mounted to an Apple iPad. (Read [Homölle and Oostenveld (2019)](https://doi.org/10.1016/j.jneumeth.2019.108378) and [notes on the incorporated FieldTrip functions](http://www.fieldtriptoolbox.org/tutorial/electrode/).) We at SCCN have created an EEGLAB plug-in extension, *get_chanlocs*, to ease the process of digitizing the positions of the electrodes from the acquired 3-D image and entering them into the *EEG.chanlocs* data structure for use with other EEGLAB (plotting and source localization) functions that require electrode position information.

The <b>major advantages</b> of using <em>get_chanlocs</em> to measure electrode positions are that: 1) <b>the 3-D image can be recorded quickly (\<1 min)</b>, thereby saving precious subject time (and attention capacity) better used to record EEG data! The researchers who have been most enthusiastic to hear about <em>get_chanlocs</em> are those collecting data from children and infants -- though even normal adult participants must feel less cognitive capacity for the experimental tasks after sitting, wearing the EEG montage, for 20 min while research assistants record the 3-D location of each scalp electrode. 2) <b>The 3-D image connects the electrode locations to the head fiducials in a very concrete and permanent way</b>; future improved head modeling will be able to use the 3-D head surface scans to fit to subject MR images or to warp template head models to the actual subject head. 3) Unlike with wand-based electrode localizing (neurologists call this electrode 'digitizing'), <b>retaining the 3-D head image allows rechecking the electrode positions</b> (e.g., if some human error occurs on first readout).
In brief, the process is as follows:

<b>Scanning the head surface:</b> A 3-D head image (3-D head ‘scan’) is acquired using the Structure scanner showing the subject wearing the electrode cap; this image acquisition typically requires a minute or less to perform. The resulting 3-D *.obj* image file is stored along with the EEG data. *get_chanlocs* also supports use of *.obj* 3-D image files obtained using the [itSeez3D scanning app](https://itseez3d.com/), which we have found easier to use for capturing good 3-D images than the Structure scanner's native app (Suggestion: ask itSeez3D about a non-commercial license).

<b>Localizing the electrodes in the 3-D scan:</b> When the data are to be analyzed, the *get_chanlocs* plug-in, called from the MATLAB command line or EEGLAB menu, guides the data analyst through the process of loading the recorded 3-D head image and then clicking on each of the electrodes in the image in a pre-planned order to compute and store their 3-D positions relative to three fiducial points on the head (bridge of nose and ears). (Note: this digitizing step may be automated at some point in the future using a machine vision approach.) The electrode labels and their 3-D positions relative to the three skull landmarks (‘fiducial points’) are then written directly into the dataset *EEG.chanlocs* structure. During this process, a montage template created for the montage used in the recorded experiment can be shown by *get_chanlocs* as a convenient visual reference to speed and minimize human error in the electrode digitization process.

<b>User Guide:</b> See the illustrated [*get_chanlocs* User Guide](https://sccn.ucsd.edu/mediawiki/images/5/5f/Get_chanlocs_userguide.pdf) for details.

<b>Uses:</b> Once the digitized electrode positions have been stored in the dataset, further (scalp field plotting and source localization) processes can use the digitized positions.

<b>Ethical considerations:</b> An institutional review board (or equivalent ethics review body) will likely consider head images as personally identifiable information. <b>Here is the IRB-approved [UCSD subject Consent form](/Media:Get_chanlocs_sampleConsent.pdf "wikilink")</b> that we use at SCCN, allowing participants to consent to different degrees of use of their 3-D head image.
### Why *get_chanlocs*?

Achieving <b>high-resolution EEG (effective) source imaging</b> requires (a) <b>an accurate 3-D electrical head model</b> and (b) <b>accurate co-registration of the 3-D scalp electrode positions to the head model</b>. Several packages are available for fashioning a geometrically accurate head model from an anatomic MR head image. We use Zeynep Akalin Acar's [Neuroelectromagnetic Forward Modeling Toolbox (NFT)](https://sccn.ucsd.edu/wiki/NFT), which she is now coupling to the first non-invasive, universally applicable method (SCALE) for estimating individual skull conductivity from EEG data (Akalin Acar et al., 2016; more news of this soon!). When a subject MR head image is *not* available, equivalent dipole models for independent component brain sources can use a template head model. Zeynep has shown that the dipole position fitting process is more accurate when the template head is warped to fit the actual 3-D positions of the electrodes -- IF these are recorded accurately. This kind of warping is performed in Zeynep's [**NFT** toolbox for EEGLAB](https://sccn.ucsd.edu/wiki/NFT).

For too long, it has been expensive and/or time consuming (for both experimenter and subject) to record (or 'digitize') the 3-D positions of the scalp electrodes for each subject. In recent years, however, cameras capable of recording images in 3-D have appeared and are becoming cheaper and more prevalent. Robert Oostenveld, originator of the FieldTrip toolbox, alerted us that he and his students in Nijmegen had added functions to FieldTrip to compute the 3-D positions of scalp electrodes from scanned 3-D images acquired by one such camera, the [Structure scanner](https://store.structure.io/store) mounted to an Apple iPad.

Recording the actual electrode positions in a 3-D head image reduces the time spent by the experimenter and subject on electrode position recording during the recording session to a minute or less, while also minimizing position digitizing system cost (to near $1000) and the space required (to an iPad-sized scanner plus enough space to walk around the seated subject holding the scanner). Digitizing the imaged electrode positions during data preprocessing is made convenient in *get_chanlocs* by using a montage template. In future, we anticipate that an automated template-matching app will reduce the time required to simply checking the results of an automated procedure.
Required Resources
------------------

The *get_chanlocs* plug-in has been tested under MATLAB 9.1 (R2016b) on Windows 10 as well as OS X 10.10.5. Please provide feedback concerning any incompatibilities, bugs, or feature suggestions using the [GitHub issue tracker](https://github.com/cll008/get_chanlocs/issues/).

<b>Scanning software:</b> In theory, any combination of 3-D scanning hardware and software that produces a Wavefront OBJ file (.obj) with the corresponding material template library (.mtl) and JPEG (.jpg) files can be used with the plug-in. *get_chanlocs* has only been tested with head models produced by the [Structure Sensor camera](https://store.structure.io/store) attached to an iPad Air (model A1474). We use the default [calibrator app](https://itunes.apple.com/us/app/structure-sensor-calibrator/id914275485?mt=8) to align the Sensor camera and the tablet camera, and both the default scanning software ([Scanner](https://itunes.apple.com/us/app/scanner-structure-sensor-sample/id891169722?mt=8)) and a third-party scanning software ([itSeez3D](https://itseez3d.com/)).

<b>Scanner vs. itSeez3D:</b> While the default scanning app ([Scanner](https://itunes.apple.com/us/app/scanner-structure-sensor-sample/id891169722?mt=8)) is free and produces models of high enough quality for the plug-in, we find the third-party app ([itSeez3D](https://itseez3d.com/)) easier to use. It seems more robust, providing better tracking and faster scans while minimizing the effects of adverse lighting conditions. itSeez3D features a user-friendly online account system for accessing high-resolution models that are processed on their cloud servers. Users may contact [itSeez3D](mailto:support@itseez3d.com) to change processing parameters; for *get_chanlocs*, we found that increasing the model polygon count beyond 400,000 results in longer processing time without providing an appreciable increase in resolution. Unfortunately, while scanning is free, exporting models (required for *get_chanlocs*) has a [per-export or subscription cost](https://itseez3d.com/pricing.html). Please contact [itSeez3D](mailto:support@itseez3d.com) regarding discounts for educational institutions and other non-commercial purposes.
Common Issues
-------------

<b>Incorrect units in resulting electrode locations:</b> 3-D .obj model units are estimated by relating the range of the recorded vertex coordinates to an average-sized head: a captured model that is much larger or smaller than average will cause errors. If your project requires scanning an atypically sized model (e.g., a large bust scan including an ECG electrode, an arm scan for an EMG sleeve, etc.), manually set obj.unit - [instead of using *ft_determine_units*](https://github.com/cll008/get_chanlocs/blob/master/private/ft_convert_units.m#L86) - to the correct unit used by your scanner {'m','dm','cm','mm'} to avoid complications.
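For example, a one-line override (a sketch only: 'obj' here stands for the loaded head-model structure at the point where the plug-in's private ft_convert_units code would otherwise estimate the units):

```matlab
% Sketch: force millimeter units rather than estimating them from the
% apparent head size; accepted values are 'm', 'dm', 'cm', and 'mm'.
obj.unit = 'mm';
```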
<b>Keyboard settings:</b> Key presses are used to rotate 3-D head models when selecting electrode locations in *get_chanlocs*. Key press parameters should be adjusted at the user's discretion: macOS and Windows systems have adjustable Keyboard Properties, where 'Repeat delay' and 'Repeat rate' may be modified. For some versions of macOS, long key presses will instead bring up an accent selection menu; in such cases, repeated single key presses can be used to control MATLAB, or users may disable the accent selection menu and enable repeating keys by typing (or pasting) the following in the terminal:
`defaults write -g ApplePressAndHoldEnabled -bool false`

One way to circumvent this issue is to use the 3-D figure rotation tool in MATLAB. First select the rotation tool, then mark electrodes by clicking as normal; to rotate the model, hold the click after selecting an electrode and drag the mouse; press 'r' to remove points as necessary.
<b>Low resolution in head model:</b> Models will have lowered resolution in MATLAB due to how 3-D .obj files are imported and handled, even if they show a reasonable resolution in other 3-D modeling software (e.g., Paint 3D). Increase the polygon count of the model to circumvent this issue (we recommend 400,000 uniform polygons for itSeez3D).

Download
--------

To download *get_chanlocs*, use the extension manager within EEGLAB. Alternatively, plug-ins are available for manual download from the [EEGLAB plug-in list](https://sccn.ucsd.edu/eeglab/plugin_uploader/plugin_list_all.php).

Revision History
----------------

Please check the [commit history](https://github.com/cll008/get_chanlocs/commits/master) of the plug-in's GitHub repository.

*get_chanlocs* User Guide
-------------------------

View/download the [*get_chanlocs* User Guide](https://sccn.ucsd.edu/eeglab/download/Get_chanlocs_userguide.pdf)

<div align=left>

Creation and documentation by:

**Clement Lee**, Applications Programmer, SCCN/INC/UCSD, <cll008@eng.ucsd.edu>
**Scott Makeig**, Director, SCCN/INC/UCSD, <smakeig@ucsd.edu>

</div>
‎plugins/roiconnect/index.md

Lines changed: 0 additions & 2 deletions
@@ -8,8 +8,6 @@ nav_order: 8
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/roiconnect).

# What is ROIconnect?

ROIconnect is a freely available open-source plugin to [EEGLAB](https://github.com/sccn/eeglab) for EEG data analysis. It allows you to perform linear and nonlinear functional connectivity analysis between regions of interest (ROIs) at the source level. The results can be visualized in 2-D and 3-D. ROIs are defined based on popular fMRI atlases, and source localization can be performed through LCMV beamforming or eLORETA. Connectivity analysis can be performed between all pairs of brain regions using coherence-based methods, Granger Causality, Time-reversed Granger Causality, the Multivariate Interaction Measure, Maximized Imaginary Coherency, phase-amplitude coupling, and other methods. This plugin is compatible with FieldTrip, Brainstorm, and NFT head models.
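A hedged command-line sketch, assuming the pop_roi_activity() and pop_roi_connect() entry points from the ROIconnect repository; the parameter names shown are assumptions, so check each function's help text in your installed version:

```matlab
% Sketch only: project data onto atlas ROIs, then compute connectivity
% between all ROI pairs. Parameter names are assumptions -- verify with
% 'help pop_roi_activity' and 'help pop_roi_connect'.
EEG = pop_loadset('filename', 'mydata.set');       % dataset with a fitted head model
EEG = pop_roi_activity(EEG, 'model', 'LCMV');      % ROI time courses via beamforming
EEG = pop_roi_connect(EEG, 'methods', { 'MIM' });  % e.g., Multivariate Interaction Measure
```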

‎plugins/viewprops/index.md

Lines changed: 3 additions & 2 deletions
@@ -8,10 +8,11 @@ nav_order: 24
---
To view the plugin source code, please visit the plugin's [GitHub repository](https://github.com/sccn/viewprops).

![](Viewprops_eye.png)

# The Viewprops EEGLAB plugin
Shows the same information as the original EEGLAB pop_prop() function, with the addition of a scrolling IC activity viewer, percent channel variance accounted for (PVAF), a dipole plot (if available), and a component label (if available). This plugin is **associated with the ICLabel EEGLAB plugin**.

## Usage
### Installation
If you do not have Viewprops installed, install it through the EEGLAB plug-in manager. You can find this in the EEGLAB window under "File"->"Manage EEGLAB extensions"->"Data processing extensions". Once the new window opens, look through the list of plug-ins until you find the "Viewprops" plug-in and mark its checkbox on the left in the column labeled "Install". You will likely have to use the "Next page" button to progress through the list of plug-ins to find Viewprops. Click "Ok" to download the selected plug-ins. You only have to do this once.
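Once installed, the viewer can also be opened from the command line; a sketch using the plugin's pop_viewprops() function (the meaning of the second argument follows the pop_prop-style convention and should be verified with `help pop_viewprops`):

```matlab
% Sketch: open the extended component-properties grid for ICs 1-35.
% The second argument (0 = components, 1 = channels) is assumed from
% the pop_prop convention -- verify with 'help pop_viewprops'.
EEG = pop_loadset('filename', 'mydata_ica.set');   % dataset with ICA weights
pop_viewprops(EEG, 0, 1:35);
```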

0 commit comments