Research Article: Methods/New Tools, Novel Tools and Methods

Real-Time Selective Markerless Tracking of Forepaws of Head Fixed Mice Using Deep Neural Networks

Brandon J. Forys, Dongsheng Xiao, Pankaj Gupta and Timothy H. Murphy
eNeuro 14 May 2020, 7 (3) ENEURO.0096-20.2020; DOI: https://doi.org/10.1523/ENEURO.0096-20.2020
Author affiliations:
1. Department of Psychiatry, Kinsmen Laboratory of Neurological Research, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada (B.J.F., D.X., P.G., T.H.M.)
2. Department of Psychology, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada (B.J.F.)
Figures & Data

Figures

    Figure 1.

    Automated labeling of paws using DeepLabCut for near real-time tracking. A) A comparison of human labels (circles) and DeepLabCut’s automated labels (crosses) for both paws across four different mice in the testing set of the model. B) The loss of our model converged near zero after approximately 30,000 iterations.

    Figure 2.

Outline of the acquisition and processing pipeline for selective paw tracking and behavioral triggering. A) Frames are acquired from the camera in batches of two at a time to enable direct comparison of movement from one frame to the next. B) The two frames are passed to a processing thread, created with Python's _thread library, so that the batch can be processed concurrently with other frames. C) The frames are analyzed, with GPU acceleration, for the locations of the body parts identified in the user-trained DeepLabCut model, yielding the coordinates of the paw positions. D) The positions of the four tracked points on the mouse's toes are averaged within each paw; this averaged paw position is compared between the first and second frames of the two-frame batch. If the absolute vertical movement of the left paw equals or exceeds 5 pixels while the vertical movement of the right paw does not exceed 10 pixels, a signal is sent to the GPIO breakout board to trigger the LED. E) Each frame, labeled with the predicted body part locations, is saved in a separate thread to limit computational load on the main thread. F) The LED feedback itself is sent in another thread so that feedback can be delivered asynchronously relative to the rest of the code. G) The LED, attached to the head-fixing pole above the mouse's head, illuminates, providing feedback to the animal. On training trials, a water reward is also delivered to the animal at this time.
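
The pipeline in panels A-F can be summarized in a few lines of code. The sketch below is a minimal illustration rather than the authors' released DeepCut2RealTime implementation: grab_frame, dlc_analyze, and set_led are hypothetical placeholders for the camera read, DeepLabCut's pose-estimation call, and the GPIO LED control, and the thresholds are the 5 and 10 px values given in the caption.

    import _thread

    # Hypothetical helpers (not part of any real API): grab_frame() returns an
    # image, dlc_analyze(frame) returns a dict of body-part coordinates, and
    # set_led(on) drives the feedback LED via the GPIO breakout board.

    LEFT_THRESH = 5   # minimum vertical left-paw movement (px) to trigger
    RIGHT_MAX = 10    # vertical right-paw movement (px) that must not be exceeded

    def mean_paw_y(pose, paw):
        """Average the vertical positions of the four tracked points on one paw."""
        ys = [pose[f"{paw}_digit{i}"][1] for i in range(1, 5)]
        return sum(ys) / len(ys)

    def process_pair(frame1, frame2):
        # C) Pose estimation (GPU accelerated) on both frames of the batch.
        pose1, pose2 = dlc_analyze(frame1), dlc_analyze(frame2)
        # D) Compare averaged paw positions between the two frames.
        d_left = abs(mean_paw_y(pose2, "left") - mean_paw_y(pose1, "left"))
        d_right = abs(mean_paw_y(pose2, "right") - mean_paw_y(pose1, "right"))
        if d_left >= LEFT_THRESH and d_right <= RIGHT_MAX:
            # F) Deliver LED feedback on its own thread so it is asynchronous.
            _thread.start_new_thread(set_led, (True,))
        # E) Saving labeled frames would likewise run on a separate thread.

    while True:
        # A) Acquire frames in batches of two for frame-to-frame comparison.
        pair = (grab_frame(), grab_frame())
        # B) Hand the batch to a processing thread so acquisition continues.
        _thread.start_new_thread(process_pair, pair)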

    Figure 3.

Selective paw movements tracked with short latency and coincident with behavioral feedback triggering. A) Magnitude of vertical movement of the mouse's left and right paws across one trial, recorded with an input frame rate of 220 Hz and an output frame rate of 70 Hz. The dotted vertical lines mark times in the trial when the mouse's left-paw movements exceeded the threshold, triggering feedback via the LED. B) Detail of the mouse's vertical paw movements over a period of 280 frames. The solid black lines indicate when the camera sent the frame in which the mouse's movement crossed the threshold; the red region and lighter gray line represent the time between frame acquisition and the LED being triggered as feedback. C) Magnitude of vertical movement of the mouse's left and right paws for each trigger in the trial depicted in A, averaged for each frame before and after the trigger. Each solid blue line represents a single trigger at its actual time, depicting the delay between receipt of the frame containing the above-threshold movement and the command being sent to activate the LED. D) Output frame rate for each second of the trial; the background curve is a Loess curve showing the smoothed average frame rate at each moment. E) Time of each behavioral trigger during the trial, plotted against the delay between receipt of the frame containing the above-threshold movement and the command being sent to activate the LED. Note: for all plots reporting movement distance in mm, the conversion was obtained by measuring the physical width of the floor of the mouse's tube and comparing it with the width, in pixels, of the same floor in a frame recorded from this trial. For plot C, the time elapsed in milliseconds is approximate because of slight variations in the number of frames processed per second across the trial (as highlighted in D).
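
The pixel-to-millimetre conversion described in the note reduces to a single scale factor. A minimal sketch, with assumed (made-up) measurement values:

    # Assumed example values: the physical width of the tube floor and its
    # width in pixels in a frame recorded from the trial define the scale.
    TUBE_FLOOR_MM = 25.0   # measured width of the tube floor (assumed)
    TUBE_FLOOR_PX = 200.0  # same width in a recorded frame, in pixels (assumed)

    MM_PER_PX = TUBE_FLOOR_MM / TUBE_FLOOR_PX

    def px_to_mm(distance_px):
        """Convert a movement distance from pixels to millimetres."""
        return distance_px * MM_PER_PX

    print(px_to_mm(5))  # the 5 px trigger threshold expressed in mm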

    Figure 4.

Contrast changes from the LED flash during one trial. A) Average contrast change from the LED flash. The y-axis represents the contrast level in a user-defined region of interest surrounding the LED, relative to baseline, averaged across all events in which contrast increased by at least three standard deviations above baseline. B) All individual contrast changes from the LED flash during the same trial, plotted on the same y-axis. Each blue tick ("trigger") represents a time at which the mouse made a vertical left-paw movement of 5 px or more while the right paw did not move more than 10 px vertically.

    Figure 5.

Average number of real-time behavioral triggers on baseline and training trials. Error bars represent the standard error of the mean. A) The average number of times, per trial, that animals made a left-paw movement of 5 pixels or more while not simultaneously making a right-paw movement greater than 10 pixels. B) The average number of times, per trial, that animals made a right-paw movement greater than 10 pixels on the frame immediately after making a left-paw movement of 5 pixels or more (while not simultaneously making a right-paw movement greater than 10 pixels). On training trials, each trigger was associated with a water reward and a flash of the red feedback LED; on baseline trials, each trigger was associated only with a flash of the red feedback LED. We used paired-samples t tests, Bonferroni corrected for multiple comparisons, to compare the average trigger count during training on the first day with the average trigger counts across all training sessions on the other days, and to evaluate within-day differences in average trigger count between training and baseline trials on each day.
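
As an illustration of this comparison (a sketch, not the authors' analysis script), a paired-samples t test with a manual Bonferroni correction can be run with SciPy; the per-animal trigger counts below are hypothetical:

    import numpy as np
    from scipy import stats

    # Hypothetical average trigger counts, one value per animal.
    training_day1 = np.array([4.2, 5.1, 3.8, 4.9])
    training_day2 = np.array([6.0, 6.8, 5.5, 7.1])
    baseline_day2 = np.array([3.9, 4.4, 3.6, 4.8])

    comparisons = {
        "day 1 training vs day 2 training": (training_day1, training_day2),
        "day 2 training vs day 2 baseline": (training_day2, baseline_day2),
    }

    # Bonferroni correction: multiply each p value by the number of tests.
    n_tests = len(comparisons)
    for name, (a, b) in comparisons.items():
        t, p = stats.ttest_rel(a, b)  # paired-samples t test
        print(f"{name}: t = {t:.2f}, corrected p = {min(p * n_tests, 1.0):.3f}")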

    Figure 6.

Average number of reaches above threshold on baseline and training trials, by digit. A) The average number of vertical left-paw digit movements ≥5 px, by digit, on training and baseline trials. B) The average number of vertical right-paw digit movements >10 px, by digit, on training and baseline trials. Digits are numbered from the rightmost to the leftmost digit (from the mouse's perspective) on each forepaw. We used ANOVAs to evaluate differences in the average number of vertical left-paw digit movements ≥5 px, and in the average number of vertical right-paw digit movements >10 px, on training and baseline trials per day.
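
As a sketch of this kind of test (the exact ANOVA design is not specified in the caption), a one-way ANOVA across digits can be computed with SciPy; the per-digit counts below are hypothetical:

    from scipy import stats

    # Hypothetical per-trial counts of above-threshold left-paw movements,
    # grouped by digit (digits numbered 1-4 as in the figure).
    digit1 = [12, 15, 11, 14]
    digit2 = [13, 16, 12, 15]
    digit3 = [11, 14, 10, 13]
    digit4 = [12, 13, 11, 14]

    f_stat, p = stats.f_oneway(digit1, digit2, digit3, digit4)
    print(f"F = {f_stat:.2f}, p = {p:.3f}")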

    Figure 7.

    Estimation statistics for the mean difference in trigger count between training and baseline trials on each day and between training trials on each day. A) Slopegraph of mean difference in trigger count by animal between training and baseline trials, and the bootstrapped 95% confidence interval around the mean difference. B) The bootstrapped 95% confidence intervals around the mean differences in trigger count on training trials on each day minus the trigger count on training trials on day 1. C) The bootstrapped 95% confidence intervals around the mean differences in trigger count on baseline trials on each day minus the trigger count on baseline trials on day 1.
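
The bootstrapped confidence intervals reported here follow a standard recipe: resample the paired differences with replacement and take percentiles of the resampled means. A minimal sketch with hypothetical data (estimation-statistics packages such as DABEST wrap the same idea):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical paired trigger counts, one value per animal.
    training = np.array([6.0, 6.8, 5.5, 7.1])
    baseline = np.array([3.9, 4.4, 3.6, 4.8])
    diffs = training - baseline

    # Resample the paired differences with replacement, recording each mean.
    boot_means = [rng.choice(diffs, size=diffs.size, replace=True).mean()
                  for _ in range(5000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean difference = {diffs.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")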

Tables

    Table 1

    A data table for all trials run in the first study

Input frame rate (Hz) | Mean output frame rate (Hz) | SD output frame rate (Hz) | Mean delay (ms) | SD delay (ms) | Mean accuracy | SD accuracy | n
90  | 46.95  | 1.53  | 34.34 | 5.94  | 0.97 | 0.01 | 13
180 | 68.13  | 5.10  | 39.85 | 42.97 | 0.94 | 0.02 | 13
200 | 70.17  | 7.07  | 32.56 | 10.70 | 0.95 | 0.00 | 13
220 | 69.31  | 0.33  | 33.82 | 15.10 | 0.95 | 0.03 | 6
270 | 80.83  | 18.27 | 55.93 | 55.69 | 0.94 | 0.01 | 13
300 | 91.45  | 5.27  | 60.62 | 77.71 | 0.94 | 0.01 | 11
320 | 109.76 | 49.24 | 53.28 | 43.80 | 0.93 | 0.04 | 9
• Input frame rate: the streaming frame rate set in our camera's software, i.e., the camera's frame rate without any additional analyses.
    • Mean output frame rate: the mean number of frames per second processed by our system across all recordings at each input frame rate.
    • SD output frame rate: the SD of the number of frames per second processed by our system across all recordings at each input frame rate.
    • Mean delay: the mean time, in milliseconds, between the system receiving a frame containing a left-paw movement exceeding 5 pixels of vertical movement and the system sending the signal to the breakout board to trigger the feedback LED.
    • SD delay: the SD, in milliseconds, of this delay.
    • Mean accuracy: the mean of TensorFlow's sigmoid output for all body parts across all trials at each frame rate.
    • SD accuracy: the SD of that sigmoid output.
    • n: the number of trials recorded at each frame rate.

    Table 2

    A data table for all trials run in the second study

Day | Input frame rate (Hz) | Mean output frame rate (Hz) | SD output frame rate (Hz) | Mean delay (ms) | SD delay (ms) | Mean accuracy | SD accuracy | n
1 | 200 | 65.00 | 0.53  | 37.00 | 10.30 | 0.99 | 0.00 | 21
2 | 200 | 66.70 | 18.26 | 39.50 | 10.81 | 0.94 | 0.01 | 20
3 | 200 | 66.05 | 0.65  | 29.69 | 7.01  | 0.90 | 0.04 | 24
4 | 200 | 65.89 | 0.57  | 31.03 | 7.50  | 0.95 | 0.02 | 24
5 | 200 | 65.76 | 0.54  | 30.96 | 8.34  | 0.95 | 0.03 | 21
• Column definitions as in Table 1.

Movies

Movie 1.

This video presents a recording of a mouse's behavior during a training session on the fifth and final day of training. It was created from frames saved during the session, automatically labeled with the position of each digit on each forepaw, and is displayed at a frame rate of 30 Hz for clarity. The frames represent 2.90 s of real-time recording (at an average frame rate of 65.76 Hz). The lower of the two LEDs at the top right of the image illuminates at the same time as a water reward is delivered through the spout in front of the mouse's mouth. When movements occur in rapid succession, the LED does not flash for every single movement because it operates with a 300 ms refractory period. This period ensures that the mouse is not reinforced for making a large number of small movements rather than discrete, larger movements.
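
The 300 ms refractory period amounts to ignoring any trigger that arrives too soon after the previous one. A minimal sketch (set_led is the same hypothetical GPIO helper used in the pipeline sketch above):

    import time

    REFRACTORY_S = 0.3   # 300 ms lockout after each feedback flash
    _last_trigger = 0.0

    def maybe_trigger():
        """Flash the LED only if the refractory period has elapsed."""
        global _last_trigger
        now = time.monotonic()
        if now - _last_trigger >= REFRACTORY_S:
            _last_trigger = now
            set_led(True)  # hypothetical GPIO helper
            return True
        return False       # still within the lockout; movement not reinforced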

Extended Data

Extended Data 1

    Supplementary LED analysis code. Download LED analysis code, ZIP file

Extended Data 2

    Supplementary DeepCut2RealTime code. Download DeepCut2RealTime code, ZIP file

Keywords

  • closed loop
  • movement tracking
  • real-time tracking
  • DeepLabCut
