Abstract
Setting up an experiment in behavioral neuroscience is a complex process that is often managed with ad hoc solutions. To streamline this process, we developed Rigbox, a high-performance, open-source software toolbox that facilitates a modular approach to designing experiments (https://github.com/cortex-lab/Rigbox). Rigbox simplifies hardware input-output, time aligns datastreams from multiple sources, communicates with remote databases, and presents visual and auditory stimuli. Its main submodule, Signals, allows intuitive programming of behavioral tasks. Here we illustrate its function with the following two interactive examples: a human psychophysics experiment, and the game of Pong. We give an overview of running experiments in Rigbox, provide benchmarks, and conclude with a discussion on the extensibility of the software and comparisons with similar toolboxes. Rigbox runs in MATLAB, with Java components to handle network communication, and a C library to boost performance.
Significance Statement
Configuring the hardware and software components required to run a behavioral neuroscience experiment and manage experiment-related data is a complex process. In a typical experiment, software is required to design a behavioral task, present stimuli, read hardware input sensors, trigger hardware outputs, record subject behavior and neural activity, and transfer data between local and remote servers. Here we introduce Rigbox, which, to the best of our knowledge, is the only software toolbox that integrates all of the aforementioned software requirements necessary to run an experiment. This MATLAB-based package provides a platform to rapidly prototype experiments. Multiple laboratories have adopted this package to run experiments in cognitive, behavioral, systems, and circuit neuroscience.
Introduction
In behavioral neuroscience, much time is spent setting up hardware and software, and ensuring compatibility between them. Experiments often require configuring disparate software to interface with distinct hardware, and integrating these components is no trivial task. Furthermore, there are often separate software components for designing a behavioral task, running the task, and acquiring, processing, and logging the data. This requires learning the fundamentals of different software packages and how to make them communicate appropriately.
Consider a typical experiment focused on decision-making, in which a subject chooses a stimulus among a set of possibilities and obtains a reward if the choice was correct (Carandini and Churchland, 2013). The software setup for this experiment may seem simple: ostensibly, all that is required is software to run the behavioral task, and software to handle experiment data. However, when considering implementation details for these two types of software, the setup can grow quite complex. Running the behavioral task requires software for starting, stopping, and transitioning between task states, presenting stimuli, reading input devices, and triggering output devices. Handling experiment data requires software for acquiring, processing, and logging stimulus history, response history, and subject physiology, and transferring data between servers and databases.
To address this variety of needs in a single software toolbox, we designed Rigbox (github.com/cortex-lab/Rigbox). Rigbox is modular, high-performance, open-source software for running behavioral neuroscience experiments and acquiring experiment-related data. Rigbox facilitates acquiring, time aligning, and managing data from a variety of sources. Furthermore, Rigbox allows users to programmatically and intuitively design and parametrize behavioral tasks via a framework called Signals.
We begin by giving a general overview of Signals, the core package of Rigbox. We illustrate the following two simple interactive examples of its use: an experiment in visual psychophysics, and the game of Pong. Next, we describe how Rigbox runs Signals experiments and manages experiment data. We then discuss the design considerations of Rigbox and the various types of experiments that have been implemented using Rigbox. Last, we detail the requirements of Rigbox and provide benchmarking results. If you wish to try out the code examples used in this paper, please install Rigbox by following the information in the github repository README file (http://github.com/cortex-lab/Rigbox).
Signals
Signals is a framework designed for building bespoke behavioral tasks. In Signals, an experiment is built from a reactive network whose nodes (“signals”) represent experiment parameters. This simplifies problems that deal with how experiment parameters change over time by representing relationships between these parameters with straightforward, self-documenting operations. For example, to define a drifting grating, a user could create a signal that changes the phase of a grating as a function of time (Fig. 1), as shown in the code below:
theta = 2 * pi; % angle of phase in radians
freq = 3; % frequency of phase in Hz
stimulus.phase = theta * freq * t; % phase that cycles at 3 Hz for given stimulus
Whenever the clock signal, t, is updated (e.g., by a MATLAB timer callback function), the values of all its dependent signals are then recalculated asynchronously via callbacks. This paradigm is known as functional reactive programming (see D. Lew, “An Introduction to Functional Reactive Programming,” https://blog.danlew.net/2017/07/27/an-introduction-to-functional-reactive-programming/).
The operations that can be performed on signals are not just limited to basic arithmetic. Many built-in MATLAB functions (including logical, trigonometric, casting, and array operations) have been overloaded to work on signals as they would on basic numeric or char types. Furthermore, a number of classical functional programming functions (e.g., “map” and “scan”) can be used on signals (Fig. 2). These allow signals to gate, trigger, filter, and accumulate other signals to define a complete experiment.
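For illustration, signals can also be created and manipulated outside of a full experiment. The following is a minimal sketch, assuming the sig.Net network class and its origin, post, and output methods behave as in the signals repository documentation, showing how map and scan derive new signals from an input signal:
net = sig.Net; % create a new Signals network
x = net.origin('x'); % create an input ("origin") signal
doubled = x.map(@(v) 2*v); % recomputed each time 'x' updates
total = x.scan(@plus, 0); % running sum of every value posted to 'x'
h = output(total); % display each new value that 'total' takes
x.post(3); % 'doubled' becomes 6, 'total' becomes 3
x.post(4); % 'doubled' becomes 8, 'total' becomes 7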
Example 1: a psychophysics experiment
Our first example of a human-interactive Signals experiment is a script that recreates a psychophysics experiment to study the mechanisms that underlie the discrimination of a visual stimulus (Ringach, 1998). In this experiment, the observer looks at visual gratings (Fig. 3a) that change rapidly and randomly in orientation and phase. The gratings change so rapidly that they summate in the visual system, and the observer tends to perceive two or three of them as superimposed. The task of the observer is to hit the “ctrl” key whenever the orientation of the grating is vertical. At key press, the probability of detection is plotted as a function of stimulus orientation in the recent past. Typically, this exposes a center-surround type of organization, with orientations near vertical eliciting responses, but orientations further away suppressing responses (Fig. 3b). The Signals network representation of this experiment is shown in Figure 4.
To run this experiment, simply run the ringach98 script (https://github.com/cortex-lab/signals/blob/master/docs/examples/ringach98.m) after installing Rigbox, and press the “Play” button. Below is a breakdown of the 30-odd lines of code.
First, some constants are defined:
oris = 0:18:162; % set of orientations, deg
phases = 90:90:360; % set of phases, deg
presentationRate = 10; % Hz
winlen = 10; % length of histogram window, frames
Next, we create a figure:
figh = figure('Name', 'Press "ctrl" key on vertical grating', ...
  'Position', [680 250 560 700], 'NumberTitle', 'off');
vbox = uix.VBox('Parent', figh); % container for the play/pause button and axes
% Create axes for the histogram plot.
axh = axes('Parent', vbox, 'NextPlot', 'replacechildren', 'XTick', oris);
xlabel(axh, 'Orientation');
ylabel(axh, 'Time (frames)');
ylim([0 winlen] + 0.5);
vbox.Heights = [30 -1]; % 30 px for the button, the rest for the plot
Next, we create our Signals network. The function playgroundPTB creates a new Signals network and one input signal, t. It creates a start button, which, when pressed, starts a MATLAB timer that periodically updates t with the time. Finally, it returns an anonymous function, setElemsFn, that, when called with a visual stimulus object, adds the textures to a stimulus renderer:
% Create a new Psychtoolbox stimulus window and renderer, returning a timing
% signal, 't', and function, 'setElemsFn', to load the visual elements.
[t, setElemsFn] = sig.test.playgroundPTB(vbox);
net = t.Node.Net; % handle to the network
Now, we derive some new signals from UI key press events and the clock signal:
% Create a signal from the keyboard presses.
keyPresses = net.fromUIEvent(figh, 'WindowKeyPressFcn');
% Filter it, keeping only 'ctrl' key presses. Turn into a logical signal.
reports = strcmp(keyPresses.Key, 'ctrl');
% Sample the current time at 'presentationRate'.
sampler = skipRepeats(floor(presentationRate * t));
To change the orientation and phase at a given frequency, we derive some indexing signals that will select a value from the orientation and phase sets. The map method calls a function with the value of a signal each time it changes. @(~) foo is the MATLAB syntax for creating an anonymous function that ignores its input argument. Each time the sampler signal changes, a new random integer is generated.
% Randomly sample orientations and phases by generating new indices for
% selecting values from 'oris' and 'phases' each time 'sampler' updates.
oriIdx = sampler.map(@(~) randi(numel(oris))); % index for 'oris' array
phaseIdx = sampler.map(@(~) randi(numel(phases))); % index for 'phases' array
currPhase = phaseIdx.map(@(idx) phases(idx)); % get current phase
currOri = oriIdx.map(@(idx) oris(idx)); % get current ori
Next, we derive some signals for updating our plot of reaction times. First, a Boolean array the size of our orientation set is created; then we derive a matrix from these vectors, storing the last 10 orientations presented.
% Create a signal to indicate the current orientation (a Boolean column vector)
oriMask = oris' == currOri;
% Record the last few orientations presented (i.e., 'buffer' the last few
% values that 'oriMask' has taken) as an MxN matrix, where M is the number of
% orientations (the length of 'oris') and N is the number of frames ('winlen')
oriHistory = oriMask.buffer(winlen);
Each time the user presses the “ctrl” key (represented by the reports signal), the values in the oriHistory matrix are added to the histogram via the scan method, which initializes the histogram with zeros.
% After each keypress, add the `oriHistory` snapshot to an accumulating histogram.
histogram = oriHistory.at(reports).scan(@plus, zeros(numel(oris), winlen));
Now, each time the histogram updates, we call imagesc with its value, updating the plot axes.
% Plot histogram surface each time it changes.
histogram.onValue(@(data) imagesc(oris, 1:winlen, flipud(data'), 'Parent', axh));
Finally, we create the visual stimulus signal and send it to the renderer. The vis.grating function returns a subscriptable signal, which has parameter fields related to visual grating properties. When the values of these signal fields are updated, the underlying textures are rerendered by setElemsFn.
% Create a Gabor with changing orientations and phases.
grating = vis.grating(t, 'sinusoid', 'Gaussian');
grating.show = true; % set grating to be always visible
grating.orientation = currOri; % assign orientation
grating.phase = currPhase; % assign phase
grating.spatialFreq = 0.2; % cyc/deg
% Add the grating to the renderer.
setElemsFn(struct('grating', grating));
With this powerful framework, a user can easily define complex relationships between stimuli, actions, and outcomes to create a complete experiment protocol. This protocol takes the form of a user-written MATLAB function, which we refer to as an “experiment definition” (“exp def”).
When Rigbox initializes an experiment, a new Signals network is created with input layer signals representing time, experiment epochs (e.g., new trials), and hardware input devices (e.g., position sensors). These input signals are passed into the exp def function, and the code in the exp def operates on these signals to create new signals that are added to the network (Fig. 5). The exp def is called just once to set up this network.
At experiment start, values are posted to the input signals of the network. During experiment runtime, these input signals are continuously updated, either within the experiment's main while loop or through user interface (UI) and timer callbacks. For example, a position sensor input device may be read continuously in a while loop to update the signal representing this device. These input signal updates propagate asynchronously to the dependent signals that were created in the exp def. The experiment ends when the “experiment stop” signal is updated (i.e., when all trial conditions have occurred or after a specified duration of time).
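As a schematic illustration of this runtime pattern, the loop below is a simplified, hypothetical sketch; the names tSig, wheelSig, expStopped, and readPositionSensor are illustrative rather than Rigbox API:
startTime = tic;
while ~expStopped
  post(tSig, toc(startTime)); % update the clock input signal
  post(wheelSig, readPositionSensor()); % update a hardware input signal
  % dependent signals created in the exp def now update via callbacks
  drawnow limitrate; % service UI and timer callbacks
end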
The following is a brief overview of the structure of an exp def. An exp def takes up to seven input arguments:
function expDef(t, events, params, visStim, inputs, outputs, audio)
In order, these are as follows:
(1) The clock signal.
(2) An events structure containing signals that define experiment epochs, as well as any signals created within the exp def that the experimenter wishes to log.
(3) A signal parameters structure that defines session- or trial-specific signals whose values can be changed directly within a graphical UI (GUI) before starting an experiment. Signal parameter defaults are set within the exp def, and parameter sets can be saved and loaded across subjects and experiments.
(4) The visual stimulus handler, which contains as fields all signals that parametrize the display of visual stimuli. Any visual stimulus signal can be assigned various elements for rendering to a screen, which the viewing model allows to be defined in visual degrees, and a visual stimulus can be loaded directly from a saved image file.
(5) An inputs structure containing signals that map to hardware input devices.
(6) An outputs structure containing signals that map to external hardware output devices.
(7) The audio stimulus handler, which can contain as fields signals that map to available audio devices.
A minimal sketch of an exp def follows.
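The sketch below assumes a wheel position sensor in the inputs structure and a rewardSize signal parameter; both are illustrative names rather than required ones:
function simpleDetectionWorld(t, events, params, visStim, inputs, outputs, audio)
% A minimal sketch of an exp def: present a grating at trial start and
% deliver a reward once the wheel is moved past a threshold.
wheelDelta = inputs.wheel - inputs.wheel.at(events.newTrial);
response = abs(wheelDelta) > 30; % updates to true once the wheel moves 30 units
stim = vis.grating(t); % a grating stimulus signal
stim.show = events.newTrial.to(response); % visible from trial start until response
visStim.stimulus = stim; % add the grating to the visual stimulus handler
outputs.reward = response.then(params.rewardSize); % trigger the reward output
events.endTrial = response.delay(1); % end the trial 1 s after the response
end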
Tutorials on creating an exp def, example exp defs and standalone scripts (including those mentioned in this article), and an in-depth overview of Signals can be found in the signals/docs folder (https://github.com/cortex-lab/signals/tree/master/docs).
Example 2: Pong
A second human-interactive Signals experiment contained in the Rigbox repository is an exp def that runs the classic computer game Pong (Fig. 6). The signal that sets the player’s paddle position is mapped to the optical mouse. The epoch structure is set so that a trial ends on a score, and the experiment ends when either the player or the computer opponent (“cpu”) reaches a target score. The code is divided into the following three sections: (1) initializing the game; (2) updating the game; and (3) creating visual elements and defining exp def signal parameters. To run this exp def, follow the directions in the header of the signalsPong file (https://github.com/cortex-lab/signals/blob/master/docs/examples/signalsPong.m). Because the file itself (including copious documentation) is >300 lines, we share only an overview here; readers are encouraged to look through the full file at their leisure. The exp def has the following signature:
function signalsPong(t, events, p, visStim, inputs, outputs, audio)
In this first section, we define constants for the game, arena, ball, and paddles:
%% Initialize the game
% how often to update the game in secs
[…]
% initial scores and target score
[…]
% size of arena, ball, and paddle: [w h] in visual degrees
[…]
% ball angle, and ball velocity in visual degrees per second
[…]
% cpu and player paddle X-axis positions in visual degrees
[…]
The helper function, getYPos, returns the y-position of the cursor, which will be used to set the player paddle:
function yPos = getYPos()
[…]
end
% get cursor's initial y-position
cursorInitialY = events.expStart.map(@(~) getYPos);
In the second section, we define how the ball and paddle interactions update the game:
%% Update game
% create a signal that will update the y-position of the player's paddle using 'getYPos'
playerPaddleYUpdateVal = (cursor.map(@(~) getYPos) - cursorInitialY) * cursorGain;
% make sure the y-value of the player's paddle is within the screen bounds,
playerPaddleBounds = cond( ...
  playerPaddleYUpdateVal > arenaSz(2)/2, arenaSz(2)/2, ...
  playerPaddleYUpdateVal < -arenaSz(2)/2, -arenaSz(2)/2, ...
  true, playerPaddleYUpdateVal);
% and only update every 'tUpdate' secs
playerPaddleY = playerPaddleBounds.at(tUpdate);
% Create a struct, 'gameDataInit', holding the initial game state
gameDataInit = struct;
[…]
% Create a subscriptable signal, 'gameData', whose fields represent the current
% game state (total scores, etc.), and which will be updated every 'tUpdate' secs
gameData = playerPaddleY.scan(@updateGame, gameDataInit).subscriptable;
The helper function, updateGame, updates gameData. Specifically, it updates the data structure with ball angle, velocity, position, cpu paddle position, and player and cpu scores, based on the current ball position, which is updated at each sampled timestep:
function gameData = updateGame(gameData, playerPaddleY)
[…]
end
% define trial end (when a score occurs)
anyScored = playerScore | cpuScore;
events.endTrial = anyScored.then(true);
% define game end (when player or cpu score reaches target score)
endGame = (playerScore == targetScore) | (cpuScore == targetScore);
events.expStop = endGame.then(true);
[…]
In the final section, we create the visual elements representing the arena, ball, and paddles, and define the exp def signal parameters, as follows:
%% Define the visual elements and the experiment signal parameters
% create the arena, ball, and paddles as 'vis.patch' subscriptable signals
arena = vis.patch(t, 'rectangle');
ball = vis.patch(t, 'circle');
ball.color = p.ballColor;
playerPaddle = vis.patch(t, 'rectangle');
cpuPaddle = vis.patch(t, 'rectangle');
% assign the arena, ball, and paddles to the 'visStim' subscriptable signal handler
visStim.arena = arena;
visStim.ball = ball;
visStim.playerPaddle = playerPaddle;
visStim.cpuPaddle = cpuPaddle;
% define parameters that will be displayed in the GUI
try
  % 'p.ballColor' is a conditional signal parameter: on any given trial, the ball
  % color will be chosen at random among three colors: white, red, blue
  p.ballColor = [1 1 1; 1 0 0; 0 0 1]'; % RGB color vector array
  % 'p.targetScore' is a global signal parameter: it can be changed via the GUI
  % used to run this exp def before starting the game
  p.targetScore = 5;
catch
end
Running experiments and managing data in Rigbox
Rigbox contains a suite of packages for interfacing with hardware, acquiring and managing data, communicating with a remote database, time aligning events from a variety of sources, and implementing a user interface for managing experiments.
Rigbox simplifies experiments by providing an abstract interface for hardware interactions. All hardware devices, including screens and speakers, are represented by abstract classes that provide a basic set of interface methods. Methods for initializing, configuring, and communicating with a particular device are handled by specific subclasses. This design choice avoids the creation of device-specific dependencies within the toolbox and the experiment code of the user. In this way, hardware devices can be swapped without modifying code or affecting the experiment workflow, and adding support for new devices is straightforward. For example, to support a new multifunction input/output (I/O) device (e.g., an Arduino or other microcontroller), one could simply extend the +hw/DaqController class, and to support a new hardware input sensor (e.g., a lever or joystick), one could simply subclass the +hw/PositionSensor class.
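For instance, the skeleton below sketches what such a subclass might look like; the property names, and the assumption that readAbsolutePosition is the abstract method to implement, should be checked against the abstract classes in the +hw package:
classdef LeverSensor < hw.PositionSensor
  % Hypothetical sketch of support for a lever read via an analog NI-DAQ channel
  properties
    DaqSession % NI-DAQ session object from which the lever voltage is read
    Channel = 1 % index of the analog input channel the lever is wired to
  end
  methods
    function [x, time] = readAbsolutePosition(obj)
      % read the lever's analog voltage and return it as a position value
      v = inputSingleScan(obj.DaqSession);
      x = v(obj.Channel);
      time = obj.Clock.now; % time stamp the reading
    end
  end
end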
Intuitive and robust data management is another essential feature of Rigbox. Simple function wrappers save and locate data via human-readable experiment reference strings that reflect straightforward experiment directory structures (i.e., subject/date/session). Data can be saved both locally and remotely, and even distributed across multiple servers. Rigbox uses a single paths config file, making it simple to change the location of data and configuration files. Furthermore, this code can be easily integrated with the personal code of a user to generate read and write paths for arbitrary datasets. A Parameters class, which sets, validates, and assorts experiment conditions for each experiment, simplifies data analysis across experiments by standardizing parameterization. Rigbox can also communicate with an Alyx database to query and post data related to a subject or session. Alyx is a lightweight meta-database that can be hosted on an internal server or in the cloud (e.g., via Amazon Web Services). Alyx allows users to organize experiment sessions and their associated files, and to keep track of subject information, such as diet, breeding, and surgeries (Bonacchi et al., 2020).
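For example, the snippet below sketches how an experiment reference might be constructed and resolved to file paths using functions from the +dat package; exact names and outputs should be checked against the repository documentation:
ref = dat.constructExpRef('subject01', now, 1); % e.g., '2020-01-01_1_subject01'
blockPath = dat.expFilePath(ref, 'block'); % path(s) to this session's block file
p = dat.paths; % struct of repository locations from the paths config file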
Experiments typically involve recording simultaneously from many devices, and temporal alignment of these recordings can be challenging. Rigbox contains a class called Timeline, which manages the acquisition and generation of clocking pulses via a National Instruments multifunction I/O data acquisition device (NI-DAQ; Fig. 7). The main clocking pulse of Timeline, “chrono,” is a digital square wave sent out from the NI-DAQ that flips each time a new chunk of data is available from the NI-DAQ. A callback function for this flip event collects the NI-DAQ time stamp of the scan on which the flip occurred. The difference between this time stamp and the system time recorded when the flip command was sent is recorded as an offset time. This offset time can be used to unify all event time stamps across computers: all event time stamps are recorded relative to chrono. A Timeline object can acquire any number of hardware or software events [e.g., from hardware inputs directly wired to the NI-DAQ, or UDP (user datagram protocol) messages sent from another computer] and record their values with respect to this offset. For example, a Timeline object can record when a reward valve or laser shutter is opened, a sensor is interacted with, or a screen displaying visual stimuli is updated. In addition to chrono, a Timeline object can also output TTL (transistor–transistor logic) and clock pulses for triggering external devices (e.g., to acquire frames at a specific rate).
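The alignment itself amounts to a simple offset correction, sketched below with hypothetical variable names:
% offset between the NI-DAQ clock and the local system clock, recomputed at each flip
offset = daqFlipTime - sysFlipTime;
% a locally time-stamped event can then be expressed relative to chrono
eventTimeOnDaqClock = eventSysTime + offset;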
Last, Rigbox provides an intuitive yet powerful user interface for running experiments. For this, two computers are required. An experiment is started from a GUI on one computer, referred to as the “Master Computer” (MC), which runs the experiment on a recording rig, referred to as the “Stimulus Computer” (SC; Fig. 8). An SC is responsible for stimuli presentation, rig hardware interaction, and data acquisition. The MC GUI is used to select, parameterize, and start experiments (Fig. 9). Customizable experiment panels can also be displayed within a different tab in the MC GUI to monitor experiments (Fig. 10). MC and SC communicate during runtime via TCP/IP (transmission control protocol/internet protocol; using WebSockets), and MC can communicate with multiple SCs simultaneously to run multiple experiments in parallel.
Instructions for installation and configuration can be found in the README file and the docs/setup folder https://github.com/cortex-lab/Rigbox/tree/master/docs/setup. This includes information on required dependencies, setting data repository locations, configuring hardware devices, and enabling communication between the MC and SC computers. Hardware and software requirements can also be found in the repository README and the Requirements and benchmarking section of this article.
Data availability
Rigbox is currently under active, test-driven development. All our code is open source, distributed under the Apache 2.0 license at https://github.com/cortex-lab/Rigbox (Extended Data 1), and we encourage users to contribute. Please see the contributing section of the README for information on contributing code and reporting issues. When using Rigbox to run behavioral tasks and/or acquire data, please cite this publication.
Extended Data 1
We recommend looking at and downloading the code directly from the github repository, at https://github.com/cortex-lab/Rigbox. Download Extended Data 1, ZIP file.
Discussion
In our laboratory, Rigbox is at the core of our operant, passive, and conditioning experiments. The principal behavioral task we use is a two-alternative forced choice visual stimulus discrimination task (Burgess et al., 2017). Using Rigbox, we have been able to rapidly prototype multiple variants of this task, including unforced choice, multisensory choice, behavior matching, and bandit tasks, using wheels, levers, balls, and lick detectors. The Signals exp defs for each variant act as a concise and intuitive record of the task design. In addition, Rigbox has made it easy to combine these tasks with a variety of recording techniques, including electrode recordings, 2-photon imaging, and fiber photometry; and neural perturbations, such as scanning laser inactivation and dopaminergic stimulation (Jun et al., 2017; Jacobs et al., 2018; Zatka-Haas et al., 2018; Shimaoka et al., 2019; Steinmetz et al., 2019; Armin et al., 2020). Rigbox has also enabled us to scale our behavioral training: because one MC can control multiple SCs, we run and manage many experiments simultaneously.
Often, experiments are iterative: task parameters are added or modified many times over, and finding an ideal parameter set can be an arduous process. Rigbox allows a user to develop and test an experiment without having to worry about boilerplate code and UI modifications, as these are handled by Rigbox packages in a modular fashion. Much of the code is object oriented with most aspects of the system represented as configurable objects. Given the modular nature of Rigbox, new features and hardware support may be easily added, provided there is driver support in MATLAB.
To the best of our knowledge, Rigbox is the most complete behavioral control software toolbox currently available in the neuroscience community; however, several other toolboxes implement similar features in different ways (Aronov and Tank, 2014; see also Bpod Wiki, https://sites.google.com/site/bpoddocumentation/home; BControl Behavioral Control System, https://brodywiki.princeton.edu/bcontrol/index.php?title=Main_Page; T. Akam, https://pycontrol.readthedocs.io/en/latest/; Table 1). Some of these toolboxes also include some features not currently available in Rigbox, for example, microsecond precision triggering of within-trial events, and creating 3D virtual environments. Indeed, the features used by a particular toolbox have advantages (and disadvantages) depending on the user’s desired experiment.
For software developers deciding how users will design behavioral tasks, there are pros and cons to following different programming paradigms. Generally, three main paradigms exist: procedural, object oriented, and functional reactive. Here, in the context of programmatic task design, we briefly discuss the differences between these paradigms and the scenarios in which one may be favored over the others. Note that here we discuss only the aspect of a toolbox that deals with behavioral task design, not the overall structure of a toolbox (e.g., Rigbox is built on an object-oriented paradigm, but Signals provides a functional reactive paradigm in which to implement a behavioral task).
A procedural approach to task design is probably the most familiar to behavioral neuroscientists. This approach focuses on “how to execute” a task by explicitly defining a control flow that moves a task from one state to the next. The BControl, pyBpod, and pyControl toolboxes follow this paradigm by using real-time finite state machines (RTFSMs), which control the state of a task (e.g., initial state, reward, punishment) during each trial. Some advantages of this approach are that it is simple and intuitive, and guarantees event timing precision down to the minimum cycle of the state machine (e.g., BControl RTFSMs run at a minimum cycle of 6 kHz). Some disadvantages are that the memory for task parameters is limited by the number of states in the RTFSM, and that the discrete implementation of states is not amenable to experiments that seek to control parameters continuously (e.g., a task that uses continuous hardware input signals).
Like the procedural approach to task design, an object-oriented approach also tends to be intuitive: objects can neatly represent the state of an experiment via data fields. Objects representing experimental parameters can easily pass information to each other and trigger experimental states via event callbacks. The VirMEn toolbox implements this approach by treating everything in the virtual environment as an object and having a runtime function update the environment by performing method calls on the objects, based on input sensor signals from a subject performing a task. Some disadvantages of this approach are that the speed of experimental parameter updates is limited by the speed at which the programming language performs dynamic binding (often much slower than the RTFSM approach discussed above), and that operation “side effects” (which can alter an experiment's state in unintended ways) are more likely to occur than in a pure procedural or functional reactive approach, owing to the emphasis on mutability.
By contrast, Signals follows a functional reactive approach to task design. As we have seen, some advantages of this approach include simplifying the process of updating experiment parameters over time, endowing parameters with memory, and facilitating discrete and continuous event updates with equal ease. In general, a task specification in this paradigm is declarative, which can often make it clearer and more concise than in other paradigms, where control flow and event handling code can obscure the semantics of the task. Some disadvantages are that it suffers from similar speed limitations as in an object-oriented approach, and programmatically designing a task in a functional reactive paradigm is probably unfamiliar to most behavioral neuroscientists. When initially thinking about how a functional reactive network runs a behavioral task, it may be helpful to think of experiment parameters as nodes in the network that get updated via callbacks; there are no procedural calls to the network during experiment runtime.
When considering the entire set of behavioral tasks, no single programming paradigm is perfect, and it is therefore important for a user to consider the goals for the implementation of their task accordingly.
Hardware requirements
For most experiments, typical, contemporary, factory-built desktops running Windows 10 with dedicated graphics cards should suffice. Specific requirements of an SC will depend on the complexity of the experiment. For example, running an audiovisual integration task on three screens requires quality graphics and sound cards. SCs may additionally require a multifunction I/O device to communicate with external rig hardware, of which only NI-DAQs (e.g., NI-DAQ USB 6211) are currently supported.
Below are some minimum hardware specifications required for computers that run Rigbox:
• CPU: 4 logical processors @ 3.0 GHz base speed (e.g., Intel Core i5-6500)
• RAM: DDR4 16 GB @ 2133 MHz (e.g., Corsair Vengeance 16 GB)
• GPU: 2 GB @ 1000 MHz base and memory speed (e.g., NVIDIA Quadro P400).
Software requirements
Similar to the hardware requirements, software requirements for an SC will depend on the experiment. For example, if acquiring data through a NI-DAQ, the SC will require the MATLAB NI-DAQmx support package in addition to the following minimum requirements:
• OS: 64 bit Windows 7 (or later)
• Libraries: Visual C++ Redistributable Packages for Visual Studio 2013 and 2015
• MATLAB: 2018b or later, including the Data Acquisition Toolbox
• Community MATLAB toolboxes:
  ∘ GUI Layout Toolbox (version 2 or later)
  ∘ Psychophysics Toolbox (version 3 or later).
Benchmarking
Fast execution of experiment runtime code is crucial for running a behavioral experiment and accurately analyzing its results. Here we provide benchmarking results for the Signals framework, covering both individual operations on a signal and operations that propagate through every signal in a network. Single built-in MATLAB operations and Signals-specific methods consistently execute in the microsecond range (Fig. 11). The network used in a typical two-alternative unforced stimulus discrimination task (https://github.com/cortex-lab/signals/blob/558170702aa2e6962c58f8b2a7b603a96b2c6b1a/docs/examples/advancedChoiceWorld.m) contains 338 signals spread over 10 layers; a similar network of 350 signals spread over 20 layers can update all signals in under 5 ms, and a network of 120 signals spread over 20 layers can update all signals in under 1 ms (Fig. 12). Last, we include results for reading from and triggering hardware devices in the above-mentioned stimulus discrimination task.
Updates of the position of a rotary encoder used to indicate choice typically took <2 ms, the time between rendering and displaying the visual stimulus typically took <15 ms, and the delay between triggering and delivering a reward was typically under 0.2 ms (Fig. 13).
All of the results in the Benchmarking section were obtained by running MATLAB 2018b on a Windows 10 64 bit OS with an Intel Core i7-8700 processor and 16 GB of dual-channel DDR4 RAM clocked at 2133 MHz. Because single executions of Signals operations were too quick for MATLAB to measure precisely, we repeated each operation 1000 times and divided the measured time returned by MATLAB by 1000. The Performance Testing Framework in MATLAB 2018b was used to obtain these results. The code used to generate the results shown in Figures 11 and 12 is at https://github.com/cortex-lab/signals/blob/558170702aa2e6962c58f8b2a7b603a96b2c6b1a/tests/Signals_perftest.m, and https://github.com/cortex-lab/signals/tree/558170702aa2e6962c58f8b2a7b603a96b2c6b1a/tests/results contains a table of these data, as well as the data used to generate the results shown in Figure 13A. A National Instruments USB-6211 was used as the data acquisition I/O device.
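The repetition approach amounts to the following sketch, where x is a hypothetical origin signal at the root of the network being benchmarked:
n = 1000;
tic
for i = 1:n
  x.post(i); % each post propagates through all dependent signals
end
perUpdateSecs = toc / n; % mean cost of one full network update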
Acknowledgments
Acknowledgments: We thank Andy Peters, Nick Steinmetz, Max Hunter, Peter Zatka-Haas, Kevin Miller, Hamish Forrest, and other members of the laboratory for troubleshooting, feedback, inspiration, and code contribution.
Footnotes
The authors declare no competing financial interests.
This work was funded by the Medical Research Council (Doctoral Training Award to C.P.B.), the Royal Society (Newton International Fellowship to A.J.P.), EMBO (Fellowship to A.J.P.), the Human Frontier Science Program (Fellowship to A.J.P.), and the Wellcome Trust (Grant 205093 to M.C. and K.D.H.). M.C. holds the GlaxoSmithKline/Fight for Sight Chair in Visual Neuroscience, University College London.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.