Journal of Neuroscience Methods

Volume 318, 15 April 2019, Pages 69-77

A novel low-noise movement tracking system with real-time analog output for closed-loop experiments

https://doi.org/10.1016/j.jneumeth.2018.12.016

Highlights

  • A movement tracking system with low-noise analog and digital output is presented.

  • Latency is under 30 ms and jitter is under 4 ms.

  • Replacing physical sensors with software allows real-time behavioral control.

  • Pyramidal neurons in freely-moving mice are stimulated in a 2D place-field model.

Abstract

Background

Modern electrophysiological experiments are moving towards closing the loop, where the extrinsic (behavioral) and intrinsic (neuronal) variables automatically affect stimulation parameters. Rodent experiments targeting spatial behavior require animal 2D kinematics to be continuously monitored in a reliable and accurate manner. Cameras provide a robust, flexible, and simple way to track kinematics on the fly. Indeed, several available camera-based systems yield high spatiotemporal resolution. However, the acquired kinematic data cannot be accessed with sufficient temporal resolution for precise real-time feedback.

New method

Here, we describe a novel software and hardware system for movement tracking based on color-markers with real-time low-noise output that works in both light and dark conditions. The analog outputs precisely represent 2D movement features including position, orientation, and their temporal derivatives, velocity and angular velocity.

Results

Using adaptive windowing, contour extraction, and rigid-body Kalman filtering, a 640-by-360 pixel frame is processed in 28 ms with less than 4 ms jitter, at 100 frames per second. The system is robust to outliers, has low noise, and maintains a smooth, accurate output even when one or more markers are temporarily missing. Using freely-moving mice, we demonstrate novel applications such as replacing conventional sensors in a behavioral arena and inducing novel place fields via closed-loop optogenetic stimulation.

Comparison with existing method(s)

To the best of our knowledge, this is the first tracking system that yields analog output in real-time.

Conclusions

This modular system for closed-loop experiment tracking can be implemented by downloading the open-source software and assembling low-cost hardware circuitry.

Introduction

Compared to manual administration of experiments, automated experimental designs have obvious advantages in terms of throughput, accuracy, reproducibility, experimenter effort, and robustness to human error. Furthermore, automated systems enable closed-loop experiments such as manipulating neuronal activity according to animal behavior (Wiener et al., 1989). In behavioral experiments, especially those targeting spatial memory and navigation, the most important extrinsic variables are the head location and orientation of the animal (Grieves et al., 2016; Moser et al., 2015; O’Keefe and Dostrovsky, 1971). Multiple approaches have been used to obtain these movement features, including a grid of motion sensors (Opto-Varimex system; Columbus Instruments, USA) and piezoelectric sensors on the floor (Flores et al., 2007). Yet arguably the cheapest, most robust, and most widely-used method is recording with a single overhead camera. This approach can be used with or without markers attached to the subject (Maghsoudi et al., 2017; Mathis et al., 2018); clearly, markers increase target salience at the cost of limiting the number of targets and reducing freedom of movement. The camera-based technique has been used for over thirty years (Skaggs et al., 1998; Wiener et al., 1989), and multiple commercial (e.g. ANY-maze; CinePlex, Plexon; EthoVision, Spink et al., 2001; OptiTrack, NaturalPoint Inc., USA; NetCom API, NeuraLynx, USA) and open-source (e.g. Bonsai, Buccino et al., 2018; Lopes et al., 2015; DeepLabCut, Mathis et al., 2018; MouseMove, Samson et al., 2015; Pyper) systems are available. These tools are useful for behavioral logging and offline analyses but typically do not support any online functionality. Those that do either enable only limited real-time output (e.g., detecting collision of the subject's position with a given region-of-interest) or are tailored to a specific platform. In sum, presently-available tracking tools are suboptimal for low-latency closed-loop experiments that rely on detailed kinematics such as orientation, velocity, or combinations thereof.

In the present work, we developed a marker-based system (Fig. 1A) that provides accurate high-resolution (100 samples/s, 4.5 mm/pixel) real-time (28 ± 3 ms delay) feedback of animal position, orientation, and their temporal derivatives (velocity and angular velocity). These features are conveyed as real-time analog (0–5 V with 12-bit resolution) signals that can be easily integrated with other variables such as neuronal recordings and be used for closed-loop manipulations such as electrical or optogenetic stimulation. Furthermore, the system outputs digital signals to indicate whether the animal is within user-defined regions of interest. Implementing the system is simple: it involves downloading the open-source software and assembling low-cost hardware circuitry.
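As an aside on the analog interface, mapping a kinematic feature onto a 0–5 V, 12-bit output reduces to a linear scaling. The following is a minimal sketch of that arithmetic only; the function names and feature ranges are illustrative placeholders, not the authors' firmware:

```python
DAC_MAX = 4095   # full scale of a 12-bit DAC
V_REF = 5.0      # 0-5 V output range

def feature_to_dac(value, lo, hi):
    """Linearly map a kinematic feature from [lo, hi] to a 12-bit code."""
    clipped = min(max(value, lo), hi)
    return int(round((clipped - lo) / float(hi - lo) * DAC_MAX))

def dac_to_volts(code):
    return code * V_REF / DAC_MAX

# Example: x position on a 640-pixel-wide frame
code = feature_to_dac(320, 0, 639)   # ~mid-scale
print(dac_to_volts(code))            # ~2.5 V
```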

The system (Fig. 1B) for tracking a subject (e.g. an animal) consists of three blocks: (1) a camera; (2) a PC that runs the software (“Spotter”) and controls the downstream hardware system; and (3) custom hardware (“Movement Controller”, MC), which outputs low-noise digital and analog signals. In the implementation described here, a Digital Signal Processor (DSP; RX8, Tucker-Davis Technologies) and a precision Current Source (CS; Stark et al., 2012) are used for closing the loop by applying intra-cortical illumination, resulting in neuronal activation of the tracked rodent. In this use case, the rodent has an implanted head-stage with two color-markers (brightly-painted blobs/LEDs). While the system works well in both light and dark conditions, each marker must have a color that is clearly distinct from the background and from other markers. The software is a Python-based application that uses a modular structure consisting of two main parts: a command-line application and an optional graphical user interface (GUI).

To provide modularity, three distinct levels of tracking are defined: Markers, Objects, and Regions of Interest (ROIs). Markers are the elementary tracking units, representing a color blob or an empty contour such as an LED. Markers are defined by a set of four parameters (hue, saturation, value, and size) that are used by the detection algorithm (Algorithm 1). An Object is composed of linked Markers and has up to six features that can be routed to analog outputs: (1) x position; (2) y position; (3) orientation θ; (4) speed ||v||; (5) movement direction ϕ; and (6) angular velocity θ̇. The first three features are first-order in the sense that they can be determined from a single frame, whereas the last three are second-order, based on the temporal derivative of multi-frame data (Fig. 1C). Note that θ and θ̇ are defined only for multi-marker objects.
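For concreteness, the six features of a two-Marker Object can be derived as in the sketch below, assuming a front and a rear marker. The variable names and update scheme are our own illustration, not Spotter's internals:

```python
import math

def object_features(front, rear, prev_center, prev_theta, dt):
    """Derive the six Object features from two linked Markers.

    front, rear: (x, y) Marker centroids in the current frame
    prev_center, prev_theta: center and orientation from the previous frame
    dt: inter-frame interval in seconds (10 ms at 100 frames/s)
    """
    # First-order features, available from a single frame
    cx = (front[0] + rear[0]) / 2.0          # (1) x position
    cy = (front[1] + rear[1]) / 2.0          # (2) y position
    theta = math.atan2(front[1] - rear[1],   # (3) orientation
                       front[0] - rear[0])

    # Second-order features, temporal derivatives across frames
    dx, dy = cx - prev_center[0], cy - prev_center[1]
    speed = math.hypot(dx, dy) / dt          # (4) speed ||v||
    phi = math.atan2(dy, dx)                 # (5) movement direction
    dtheta = math.atan2(math.sin(theta - prev_theta),
                        math.cos(theta - prev_theta))  # wrap to (-pi, pi]
    theta_dot = dtheta / dt                  # (6) angular velocity

    return (cx, cy), theta, speed, phi, theta_dot
```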

Although the detection algorithm typically results in a highly accurate single-frame color-blob detection rate, continuity is not imposed, and thus multi-frame tracking is typically erratic. To account for these and other sources of noise, we designed an adaptive denoising and location estimation algorithm. The procedure (Algorithm 2) is based on the Kalman filter (Kalman, 1960), a real-time data fusion procedure that combines noisy measurements with estimates, resulting in smoother and more accurate output.
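The rigid-body, multi-marker formulation of Algorithm 2 is not reproduced in this excerpt. As a rough illustration of the underlying principle, the sketch below is a minimal textbook constant-velocity Kalman filter for a single 2D point (the noise parameters q and r are arbitrary placeholders). Note how a frame with no detection falls back on the prediction, which is what keeps the output smooth when a marker is temporarily missing:

```python
import numpy as np

class ConstantVelocityKalman(object):
    """Minimal 2D constant-velocity Kalman filter.

    State: [x, y, vx, vy]; measurement: [x, y] (marker centroid in pixels).
    """

    def __init__(self, dt, q=1e-2, r=4.0):
        self.x = np.zeros(4)                 # state estimate
        self.P = np.eye(4) * 100.0           # estimate covariance
        self.F = np.eye(4)                   # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))            # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q               # process noise
        self.R = np.eye(2) * r               # measurement noise (pixels^2)

    def step(self, z=None):
        # Predict forward one frame
        self.x = self.F.dot(self.x)
        self.P = self.F.dot(self.P).dot(self.F.T) + self.Q
        # Update only when a detection exists; a missing marker leaves the
        # prediction in place, keeping the output smooth during dropouts
        if z is not None:
            y = np.asarray(z, dtype=float) - self.H.dot(self.x)
            S = self.H.dot(self.P).dot(self.H.T) + self.R
            K = self.P.dot(self.H.T).dot(np.linalg.inv(S))
            self.x = self.x + K.dot(y)
            self.P = (np.eye(4) - K.dot(self.H)).dot(self.P)
        return self.x[:2]                    # filtered (x, y)
```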

The last level of modularity is a Region of Interest (ROI), which is a part of the image described by one or more geometric shapes (circles, lines, and/or squares). Linking an ROI to an Object continuously checks for collision between the Object location and the ROI; the real-time state of this check can be emitted as a binary (digital) output.
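The collision check amounts to a point-in-shape test per frame. A minimal sketch, assuming circular and rectangular shapes (the digital-output call at the end is hypothetical):

```python
def in_circle(pt, center, radius):
    return (pt[0] - center[0]) ** 2 + (pt[1] - center[1]) ** 2 <= radius ** 2

def in_rect(pt, top_left, bottom_right):
    return (top_left[0] <= pt[0] <= bottom_right[0] and
            top_left[1] <= pt[1] <= bottom_right[1])

def roi_state(object_xy, shapes):
    """True if the Object position falls inside any shape of the ROI.

    shapes: list of ('circle', center, radius) or ('rect', tl, br) tuples.
    """
    for shape in shapes:
        if shape[0] == 'circle' and in_circle(object_xy, shape[1], shape[2]):
            return True
        if shape[0] == 'rect' and in_rect(object_xy, shape[1], shape[2]):
            return True
    return False

# The boolean can drive a digital output line on every frame, e.g.:
# set_digital_pin(3, roi_state(obj_xy, roi_shapes))   # hypothetical call
```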

Section snippets

Results

The system can track, in real time (28 ± 3 ms delay), the simultaneous movement of up to four Objects (e.g. behaving animals) and maintain up to sixteen ROIs (see Application I below). With a camera, the software can work as a stand-alone logger that may record (and later play back) 2D kinematics. Integration with the MC yields real-time analog and digital outputs available for data acquisition (DAQ) and/or processing on the fly. The generic MC design (Fig. 7) allows scaling the number of tracked …

The software: spotter

Spotter is a modular library written in Python 2.7. The software grabs a new frame from the camera utilizing OpenCV 2.4, an open-source machine vision library (Bradski, 2000). The frame then passes through a processing module; results are communicated via a serial interface to an external microcontroller. User input can be added through the command line or through the GUI, built with PyQt4 (Qt 4.8, The Qt Company).
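The acquisition loop implied by this description can be sketched as follows, using OpenCV's VideoCapture and pyserial. The processing function, port name, and wire format below are placeholders, not Spotter's actual interfaces:

```python
import cv2       # OpenCV (the paper uses OpenCV 2.4)
import serial    # pyserial; assumes a serial link to the microcontroller

def process_frame(frame):
    """Placeholder for the tracking pipeline (detection + filtering)."""
    h, w = frame.shape[:2]
    return (w / 2.0, h / 2.0, 0.0)            # dummy x, y, theta

cap = cv2.VideoCapture(0)                     # first attached camera
port = serial.Serial('/dev/ttyUSB0', 115200)  # port name is machine-specific

while True:
    ok, frame = cap.read()                    # grab the next frame
    if not ok:
        break
    x, y, theta = process_frame(frame)
    # Illustrative wire format: comma-separated features, one line per frame
    port.write(('%.2f,%.2f,%.3f\n' % (x, y, theta)).encode())
```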

Finding marker positions

Algorithm 1 is based on color segmentation. It identifies user-defined Markers on …
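A typical color-segmentation step of this kind, matching the four Marker parameters (hue, saturation, value, and size), might look as follows in OpenCV. The thresholds and the function itself are illustrative, not a reproduction of the paper's Algorithm 1:

```python
import cv2
import numpy as np

def find_marker(frame_bgr, hsv_lo, hsv_hi, min_area):
    """Locate one color Marker by HSV thresholding and contour extraction.

    hsv_lo, hsv_hi: (hue, saturation, value) bounds for the Marker's color
    min_area: minimum contour area in pixels (the Marker's 'size' parameter)
    Returns the centroid (x, y) of the largest matching blob, or None.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Note: OpenCV 2.4 and 4.x return (contours, hierarchy) here;
    # 3.x prepends an image element, so unpacking differs across versions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])   # blob centroid
```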

Discussion

We described a modular and flexible open-source software and hardware system that enables real-time (28 ± 3 ms delay, 100 fps) low-noise tracking of the 2D kinematics of multiple objects. The system was applied to the task of tracking a freely-moving rodent equipped with on-head LEDs. Using two adaptive algorithms, the system maintained stable object tracking even when one or both LEDs were obscured, and when noise and reflections were present in the field of view. This approach eliminates the …

Author contributions

N.G. wrote the software, developed and built the hardware, developed Algorithm 2, collected and analyzed data, and wrote the manuscript. R.E. developed Algorithm 1, wrote the software, and developed the hardware. E.S. conceived and supervised the project, developed Algorithm 2, built optoelectronic devices and implanted animals, analyzed data, and wrote the manuscript.

Competing financial interests

The authors have no competing financial interests to disclose.

Acknowledgements

We thank Lisa Roux and Leore Heim for testing the system and for providing feedback on its limitations. This work was funded by a CRCNS grant (#2015577) from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel, and the United States National Science Foundation (NSF); and by the Israel Science Foundation (ISF; grant #638/16).
