Research Article: Methods/New Tools, Novel Tools and Methods

Markerless Mouse Tracking for Social Experiments

Van Anh Le, Toni-Lee Sterley, Ning Cheng, Jaideep S. Bains and Kartikeya Murari
eNeuro 17 January 2024, 11 (2) ENEURO.0154-22.2023; https://doi.org/10.1523/ENEURO.0154-22.2023
Affiliations: Van Anh Le (1), Toni-Lee Sterley (2), Ning Cheng (2,3,4), Jaideep S. Bains (2), Kartikeya Murari (1,2,5)

1. Electrical and Software Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
2. Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
3. Faculty of Veterinary Medicine, University of Calgary, Calgary, AB T2N 1N4, Canada
4. Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
5. Biomedical Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada

Figures & Data

Figures

Figure 1.

    Snapshots taken from the videos illustrating 12 experimental setups used in the mouse tracking (MT) dataset. Details for each of the settings are in Table 1.

Figure 2.

    The pipeline of the proposed algorithm for MT and feature detection after training is complete. Traditional segmentation (details in Extended Data Fig. 2-1) can optionally be used to reduce computational cost; users can bypass it and use Mask R-CNN exclusively. Our approach generates body masks and snout and tail-base coordinates. It can optionally be ensembled with deep-learning-based pose estimation techniques such as DLC or SLEAP to further improve snout and tail-base detection, as sketched below.
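As one illustration of the optional ensembling step named in the caption, here is a minimal Python sketch. The gating rule (accept the pose-estimator keypoint only when it lands on the Mask R-CNN body mask, otherwise fall back to the mask-derived keypoint) is an assumption for illustration, not necessarily the rule the paper uses.

```python
# Hypothetical ensembling rule (an assumption, not the paper's method):
# trust the DLC/SLEAP keypoint when it falls inside the segmented body,
# otherwise keep the Mask R-CNN-derived keypoint.
import numpy as np

def ensemble_keypoint(body_mask, kp_mask_based, kp_pose):
    """body_mask: (H, W) bool array; keypoints: (x, y) pixel coordinates."""
    x, y = np.round(np.asarray(kp_pose)).astype(int)
    h, w = body_mask.shape
    if 0 <= y < h and 0 <= x < w and body_mask[y, x]:
        return kp_pose
    return kp_mask_based
```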

Figure 3.

    Training and evaluation of the Mask R-CNN. a, Example images with labels taken from the auto-training set. b, Example images with human annotations taken from the manual-training set. Extended Data Figure 3-1 compares the human annotation required for each approach. c, Mask, bounding box, and classification losses for fivefold cross-validation on the auto-training set, in which the dataset was split into five parts and the model was trained on four and validated on the held-out fifth. d, Average precision metrics (AP, AP75, and AP50) on test data for three splits of training data vs. number of training images, corresponding to 0, 20, 40, 60, and 80% of the manual-training set. e, Kernel density estimate of AP of the auto-trained and dual-trained model groups on test data for the first split of the training set, which accounts for 20–80% of the manual-training set. f, Visualization of the outputs of the auto-trained and dual-trained model groups with 20, 40, 60, and 80% training fractions from the manual-training set (red) and human annotation (yellow). Each row shows performance on a different frame. Number pairs give the predicted mask confidence and IoU.
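For readers unfamiliar with the fivefold procedure in panel c, here is a minimal sketch assuming scikit-learn; `dataset` is a placeholder for the auto-training images, and the Mask R-CNN training call itself is elided.

```python
# Minimal fivefold cross-validation sketch (cf. Fig. 3c), assuming scikit-learn.
import numpy as np
from sklearn.model_selection import KFold

dataset = np.arange(1000)  # placeholder indices into the auto-training set
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(dataset)):
    train_split, val_split = dataset[train_idx], dataset[val_idx]
    # Train Mask R-CNN on train_split; record mask, bounding-box, and
    # classification losses on val_split for this fold.
```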

Figure 4.

    Percentage of frames with segmentation errors over 12 videos in 4 categories in the MT dataset. The auto-trained model is built using the auto-training set, which required no human effort and contains no mice in close proximity. The dual-trained model is a fine-tuned version of the auto-trained model, incorporating manually segmented images of closely interacting mice from the manual-training set. The manual-trained model is trained on the manual-training set alone.

Figure 5.

    Comparison of our approaches—MD and ensemble—with DLC and SLEAP. The upper panel shows average MOTA and the lower panel shows total instances of switched identities across all 12 videos in the 4 categories.
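The caption presupposes the MOTA metric; presumably this is the standard multiple object tracking accuracy from the tracking literature, which penalizes misses, false positives, and identity switches:

```latex
% Standard MOTA definition (assumed; the caption does not restate it):
% FN_t = misses, FP_t = false positives, IDSW_t = identity switches,
% GT_t = ground-truth instances, summed over frames t.
\mathrm{MOTA} = 1 - \frac{\sum_t \left( \mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t \right)}{\sum_t \mathrm{GT}_t}
```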

Figure 6.

    Performance across 12 videos in 4 categories. a, Fraction of frames with the mean distance between model predictions and human annotations below a varying threshold. b, Boxplots showing errors in MD, DLC, and ensemble models. Plots show the median, the 25th and 75th percentiles, and outliers, defined as values above the 75th percentile plus 1.5 times the inter-quartile range. Text above the outliers gives the number of outliers and, in parentheses, the average outlier value. Results for individual videos are shown in Extended Data Figures 6-1–6-3.
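As a concrete reading of the outlier rule used in panel b, here is a minimal numpy sketch; the `errors` array is a placeholder, not data from the paper.

```python
# Boxplot outlier rule from Fig. 6b: values above Q3 + 1.5 * IQR.
import numpy as np

errors = np.abs(np.random.randn(5000))  # placeholder per-frame errors (pixels)
q25, q75 = np.percentile(errors, [25, 75])
cutoff = q75 + 1.5 * (q75 - q25)
outliers = errors[errors > cutoff]
print(f"{outliers.size} outliers, average outlier {outliers.mean():.2f}")
```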

Figure 7.

    Examples of a, good and b, visibly inaccurate snout and tail-base detection in a group of three mice. c, Snout and tail-base trajectories for each animal along with a distribution of frame-to-frame movement in x and y coordinates.

Figure 8.

    Application of the markerless MT approach showing that loss of olfaction through ablation of the main olfactory epithelium impairs social recognition behaviors in mice. a, Mice were housed in same sex pairs; one mouse per pair was a control (intranasal saline-treated) or anosmic (intranasal ZnSO4-treated). b, Experimental paradigm for social recognition test: anosmic/control mice acted as “observers” (Obs) and were presented with either a familiar demonstrator (Dem) or an unfamiliar demonstrator. c, Left: behavioral ethograms of control (top) and anosmic (bottom) observers while interacting with a familiar or unfamiliar demonstrator. Each color indicates when the snout of the observer was directed toward the anogenital (blue), body (orange), or head/face (green) region of the demonstrator. Right: cumulative distributions of each of the social investigative behaviors for control (top) and anosmic (bottom) observers toward familiar (solid line) or unfamiliar (dotted line) demonstrators. d, Total number of social investigation events by control (top) and anosmic (bottom) observers directed toward the anogenital, body, or head/face region of familiar versus unfamiliar demonstrators. e, Average duration of each social investigation event. f, Interval between social investigation events. *p < 0.05; **p < 0.01; ***p < 0.001. Paired t-test comparing social investigation toward a familiar versus unfamiliar demonstrator. Analysis of demonstrator mice behavior in Extended Data Figure 8-1.

Figure 9.

    Application of the markerless MT approach showing that anosmic mice spend more time in contact (non-snout-directed contact) with unfamiliar versus familiar demonstrators. a, Left: behavioral ethograms of control (top) and anosmic (bottom) observers while interacting with a familiar or unfamiliar demonstrator. Each brown bar indicates when the mice were in contact (touching) other than as a result of snout-directed investigation by the observer or demonstrator. Right: cumulative distributions of touching in pairs of control (top) or anosmic (bottom) observers with familiar (solid line) or unfamiliar (dotted line) demonstrators. Bar graph shows total time spent touching during the first minute of the 5-min period. b, Total number of touching events between control (left) or anosmic (right) observers and familiar or unfamiliar demonstrators. c, The average duration of each touching event. d, The inter-event interval between touching events. *p < 0.05. Paired t-test comparing touching between observers and familiar versus unfamiliar demonstrators.

Figure 10.

    Application of the markerless MT approach showing distance traveled and velocities of pairs of mice (observer + demonstrator). a, An example of raw tracking data of observers (red) and demonstrators (blue), where observers were control (top) or anosmic (bottom) with familiar (left) or unfamiliar demonstrators (right). b, Total distance traveled by observers and their familiar or unfamiliar demonstrators where the observer was control (top) or anosmic (bottom). c, Total time spent stationary by observers and their familiar or unfamiliar demonstrators where the observer was control (top) or anosmic (bottom). d, Examples of velocities (1 s bins) of observers (red) and demonstrators (blue) with familiar or unfamiliar demonstrators where the observer was control (top) or anosmic (bottom). Inset graphs show Pearson correlation analyses of observer velocities versus demonstrator velocities. Data for all animals are shown in Extended Data Table 10-1. b and c, *p < 0.05, one-way ANOVA with Sidak’s multiple comparisons test.
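For readers who want to replicate the velocity analysis in panel d, here is a minimal sketch assuming tracked positions in metres and scipy for the Pearson correlation; the frame rate, the placeholder tracks, and the exact binning are assumptions, not the authors' code.

```python
# Speed in 1 s bins per animal, then Pearson r between observer and
# demonstrator speeds (cf. Fig. 10d insets and Extended Data Table 10-1).
import numpy as np
from scipy.stats import pearsonr

def binned_speed(xy, fps, bin_s=1.0):
    """Path length per bin divided by bin duration (m/s); xy is (N, 2) in metres."""
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
    n = int(bin_s * fps)
    usable = (step.size // n) * n
    return step[:usable].reshape(-1, n).sum(axis=1) / bin_s

fps = 30  # assumed frame rate
observer = np.cumsum(np.random.randn(9000, 2) * 0.002, axis=0)      # placeholder track
demonstrator = np.cumsum(np.random.randn(9000, 2) * 0.002, axis=0)  # placeholder track
r, p = pearsonr(binned_speed(observer, fps), binned_speed(demonstrator, fps))
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```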

Tables

    Table 1.

    Details of the experimental settings used in the MT dataset

    Video | Animal genotype             | Sex | Bedding | Food and water | Implant
    1     | C57BL/6, C57BL/6            | F/F | –       | –              | –/–
    2     | Crh-IRES-Cre::Ai14, C57BL/6 | F/F | –       | –              | Fiberoptic/–
    3     | C57BL/6, C57BL/6            | F/F | Yes     | –              | –/–
    4     | Crh-IRES-Cre::Ai14, C57BL/6 | F/F | Yes     | Yes            | Fiberoptic/–
    5     | C57BL/6, C57BL/6            | M/F | Yes     | Yes            | –/–
    6     | BTBR, BTBR                  | M/F | Yes     | –              | –/–
    7     | Crh-IRES-Cre::Ai14, C57BL/6 | F/F | Yes     | –              | Fiberoptic/–
    8     | C57BL/6, C57BL/6            | F/F | –       | Yes            | –/–
    9     | Crh-IRES-Cre::Ai14, C57BL/6 | F/F | –       | Yes            | Fiberoptic/–
    10    | C57BL/6, C57BL/6            | M/F | –       | Yes            | –/–
    11    | C57BL/6, C57BL/6            | M/F | –       | –              | EEG/–
    12    | C57BL/6, C57BL/6            | M/M | –       | –              | EEG/EEG
    Table 2.

    Benchmarks for quantifying sniffing behaviors using a distance thresholding approach with snout, tail-base, and body mask coordinates

                          Annotator 1 vs. annotator 2    Annotator 1 vs. thresholding
                          Precision  Recall  F1 score    Precision  Recall  F1 score
    Head-directed         0.75       0.53    0.62        0.54       0.54    0.54
    Anogenital-directed   0.72       1.00    0.84        0.86       0.68    0.76
    Body-directed         0.82       0.86    0.84        0.74       0.88    0.80
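To make the thresholding baseline concrete, here is a minimal sketch of a distance-threshold sniff classifier of the kind the table evaluates. The threshold value, the snout/tail-base midpoint standing in for the body-mask coordinate, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical distance-threshold sniff scoring (cf. Table 2). Each frame is
# labeled by the nearest target to the observer's snout, if within `thresh`.
import numpy as np

def sniff_labels(obs_snout, dem_snout, dem_tail, thresh=20.0):
    """obs_snout, dem_snout, dem_tail: (N, 2) pixel coordinates per frame.
    Returns 0 (head-), 1 (anogenital-), 2 (body-directed), or -1 (none)."""
    d = np.stack([
        np.linalg.norm(obs_snout - dem_snout, axis=1),                   # head
        np.linalg.norm(obs_snout - dem_tail, axis=1),                    # anogenital
        np.linalg.norm(obs_snout - (dem_snout + dem_tail) / 2, axis=1),  # body proxy
    ], axis=1)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) >= thresh] = -1  # too far from every target
    return labels
```

Precision, recall, and F1 against a human annotator then follow from matching the predicted per-frame labels to the annotations.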

Extended Data

Figure 2-1

    Foreground detection includes background subtraction, thresholding, and a series of morphological operations: closing with a 9-pixel-radius circular structuring element followed by opening with a 3-pixel-radius circular structuring element.
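A minimal OpenCV sketch of these steps; the grayscale inputs, file names, and threshold value are illustrative assumptions rather than details from the paper.

```python
# Foreground detection per Extended Data Fig. 2-1: background subtraction,
# thresholding, then closing (9 px radius) and opening (3 px radius) with
# circular structuring elements.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)           # placeholder
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # placeholder

diff = cv2.absdiff(frame, background)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # assumed threshold

close_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (19, 19))  # 9 px radius
open_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))     # 3 px radius
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_k)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, open_k)
```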

Figure 3-1

    Example human annotations required for training DLC (top row), SLEAP (middle row), and our approach (bottom row). Although the annotation styles differ, the human effort and time required are comparable across approaches.

Figure 6-1

    Performance across videos 1-4. (a) Fraction of frames with the mean distance between model predictions and human annotations below a varying threshold. (b) Boxplots showing errors in MD, DLC, and ensemble models. Plots show the median, the 25th and 75th percentiles, and outliers defined as values above the 75th percentile plus 1.5 times the inter-quartile range. Text above the outliers gives the number of outliers and, in parentheses, the average outlier value.

Figure 6-2

    Performance across videos 5-8. (a) Fraction of frames with the mean distance between model predictions and human annotations below a varying threshold. (b) Boxplots showing errors in MD, DLC, and ensemble models. Plots show the median, the 25th and 75th percentiles, and outliers defined as values above the 75th percentile plus 1.5 times the inter-quartile range. Text above the outliers gives the number of outliers and, in parentheses, the average outlier value.

Figure 6-3

    Performance across videos 9-12. (a) Fraction of frames with the mean distance between model predictions and human annotations below a varying threshold. (b) Boxplots showing errors in MD, DLC, and ensemble models. Plots show the median, the 25th and 75th percentiles, and outliers defined as values above the 75th percentile plus 1.5 times the inter-quartile range. Text above the outliers gives the number of outliers and, in parentheses, the average outlier value.

Movie 1

    Output of SLEAP, multi-animal DLC, and our approach (Mask R-CNN) on a simple, uncluttered video (video #1 from the MT dataset).

Movie 2

    Output of SLEAP, multi-animal DLC, and our approach (Mask R-CNN) on a video with one animal carrying a fiberoptic tether (video #2 from the MT dataset).

Movie 3

    Output of SLEAP, multi-animal DLC, and our approach (Mask R-CNN) on a video with one animal carrying a fiberoptic tether and bedding in the arena (video #7 from the MT dataset).

Figure 8-1

    Social investigation behaviors of familiar or unfamiliar demonstrators toward control or anosmic observers. Social investigation includes periods when the snout of the demonstrator was directed toward the anogenital, body, or head/face region of the observer. Cumulative distributions show that familiar (solid line) and unfamiliar (dotted line) demonstrators spend similar amounts of time engaged in each social investigation behavior when with a control (top) or anosmic (bottom) observer.

Table 10-1

    Velocities (calculated in 1 s bins as metres per second) correlated more frequently in control observer + demonstrator pairs (75%) than in anosmic observer + demonstrator pairs (25%).

Movie 4

    Tracking with snout and tail-base locations returned by our approach for a video from our anosmia study, which has poorer illumination and contrast than the MT dataset videos.


Keywords

  • computer vision
  • deep learning
  • mouse tracking
  • social behavior
