Meso-Py: Dual Brain Cortical Calcium Imaging in Mice during Head-Fixed Social Stimulus Presentation

Abstract We present a cost-effective, compact-footprint, open-source Raspberry Pi-based widefield imaging system. Its compact nature allows close-proximity dual-brain cortical mesoscale functional imaging, simultaneously observing activity in two head-fixed animals during a staged social touch-like interaction. We provide all schematics, code, and protocols for a rail system in which head-fixed mice are brought together to a distance where the macrovibrissae of each mouse make contact. Cortical neuronal functional signals (GCaMP6s; genetically encoded Ca2+ sensor) were recorded from both mice simultaneously before, during, and after the social contact period. When the mice were together, we observed bouts of mutual whisking and cross-mouse correlated cortical activity across the cortex. Correlations were not observed in trial-shuffled mouse pairs, suggesting that correlated activity was specific to individual interactions. Whisking-related cortical signals were observed during the period when the mice were together (closest contact). The effects of social stimulus presentation extend beyond regions associated with mutual touch and have global synchronizing effects on cortical activity.


Introduction
Social interaction is a fundamental component of life across the animal kingdom. In mice, neurophysiological responses to social stimuli have been observed in many areas throughout the brain (Dölen et al., 2013; Gunaydin et al., 2014; Zhou et al., 2017; Rogers-Carter et al., 2018; Walsh et al., 2018; B. Guo et al., 2019; Sych et al., 2019; Tschida et al., 2019), but dorsal cortical circuits are relatively unexplored in this context. Widefield cortical calcium imaging samples neural activity across the entire dorsal cortex in vivo (Musall et al., 2019; Pinto et al., 2019; Gilad and Helmchen, 2020; Lohani et al., 2022; Matteucci et al., 2022), and may therefore present an opportunity to study mesoscale cortical circuit function during social behavior. However, finding neurophysiological correlates of interanimal interactions can be challenging because of the high degree of variability with which such interactions may occur. In this work, we employ a paradigm where cortical functional GCaMP (genetically encoded Ca2+ sensor) activity is imaged during staged, head-fixed interactions between mice. The head-fixed nature of the interaction allows for imaging of cortical activity across both interacting animals throughout the experiment, while reducing the complexity of possible interactions. Face-to-face interactions between mice synchronize cortical activity over spatial scales extending beyond single cortical areas; this synchronization is not limited to regions primarily processing whisker/touch-dependent signals. Moreover, we present detailed resources to help investigators set up a cost-effective, compact-footprint, open-source Raspberry Pi-based imaging system that facilitates mesoscale head-fixed imaging in multiple interacting animals.

Animals and experimental considerations
All animal procedures were performed in accordance with the Canadian Council on Animal Care and approved by the University of British Columbia animal care committee's regulations. Transgenic GCaMP6s tetO-GCaMP6s x CAMK tTA mice (Wekselblatt et al., 2016) were obtained from The Jackson Laboratory. All mice used in this study were males >60 d of age and socially housed (n = 15 mice, up to four mice/cage, from 6 cages) with 12/12 h light/dark cycles and free access to food and water. One pair of Thy1-GFP mice (Feng et al., 2000) from the same cage were also tested as controls to determine whether hemodynamic changes confounded the epifluorescence signal. We did not employ female mice or male and female mouse pairs because of potential for variation across the estrous cycle that may alter social behavior. Based on previous work, we do expect female mice to show barrel cortex-dependent social interaction (Bobrov et al., 2014).

Surgical procedure
Chronic windows were implanted on male mice that were at least eight weeks old, as previously described (Silasi et al., 2016). Mice were anesthetized with 2% isoflurane in oxygen, and fur and skin were removed from the dorsal area of the head, exposing the skull over both dorsal brain hemispheres. After cleaning the skull with a phosphate-buffered saline solution, a titanium head-fixing bar was glued to the skull above lambda (Fig. 1a) and reinforced with clear dental cement (Metabond). A custom-cut coverslip was glued with dental cement on top of the skull (Fig. 1a), with the edges of the window reinforced with a thicker mix of dental cement, similar to the procedure of Silasi et al. (2016). Mice recovered for at least 7 d before imaging or head-fixation.

Social hierarchy measurements
Social rank was estimated using the tube-test assay (Fan et al., 2019) in a subset of eight mice from two cages. Briefly, mice were introduced to either end of a narrow Plexiglas tube (32 cm long, 2.5-cm inner diameter). Upon meeting in the middle, mice compete by pushing each other to get to the opposite side. The mouse which pushes the other back out of the tube is deemed the winner. A plastic barrier was inserted at the midpoint of the tube and was removed when both mice approached the middle to avoid biases. All combinations of mice within a cage were tested in a round-robin format to determine the linear hierarchy. Tube test tournaments were repeated for four trials.

Dual mouse imaging experiments
Two Raspberry Pi imaging rigs were set up facing each other, initially separated by 14 cm. A parts list and assembly instructions for the Raspberry Pi widefield imaging rig are included in the Open Science Framework project (see link below, in Resource availability section). One imaging rig was placed atop a translatable rail (Sherline 5411 XY Milling Machine Base), which was driven by a stepper motor to bring the mouse (hereafter referred to as the moving mouse) into the proximity of the other mouse (stationary mouse, 6- to 12-mm intersnout distance; see Table 1 and Open Science Framework project for details). Stage translation occurred 2 min after imaging began and lasted ~27.5 s. Animals were imaged together for 2 min before they were returned to their separated orientation, and imaged for another 2 min. Thus, we imaged dorsal cortical activity from two head-restrained mice simultaneously, while varying the distance between snouts (Fig. 1b,c; Movie 1). Randomization was not used to allocate animals into groups because of the relatively small sample size of mice; however, all combinations of cagemate mouse pairs were tested. Cagemate and noncagemate designations were determined early, at weaning, before any experiments. In some experiments, a partition, either a copper mesh or an opaque cardboard sheet, was placed in front of the stationary mouse and remained there throughout the experiment. Each partition prevents physical contact between whiskers; however, the mice can likely see each other through the copper mesh, and smell each other through either partition. Mice were habituated to the system for at least one week before conducting experiments by head-fixing the animals each day and performing all procedures [e.g., translation, switching light-emitting diodes (LEDs) on/off] without the presence of the other mouse.
The entire imaging system was housed inside a box lined with acoustic foam, which reduced ambient light and noise. Throughout the experiment, audio recordings were obtained at 200 kHz using an ultrasonic microphone (Dodotronic, Ultramic UM200K) positioned within the recording chamber ~5 cm from each mouse's snout. Audio recordings were analyzed for ultrasonic vocalizations using the MATLAB toolbox DeepSqueak (Coffey et al., 2019).

Behavior imaging
The experimental setup was illuminated with an infrared (850 nm) light-emitting diode (LED), and behaviors were monitored using an infrared Raspberry Pi camera (OmniVision, OV5647 CMOS sensor). Behavior videos were captured at a framerate of 90 frames per second (fps) with a resolution of 320 × 180 pixels. The camera was positioned such that the stationary mouse was always included in the field of view and both mice were visible when they were together (Fig. 2a).

GCaMP image acquisition
GCaMP activity was imaged using RGB Raspberry Pi cameras (OmniVision OV5647 CMOS sensor). The GCaMP imaging cameras had lenses with a focal length of 3.6 mm and a field of view of ~10.2 × 10.2 mm, leading to a pixel size of ~40 μm, and were equipped with triple-bandpass filters (Chroma 69013m), which allowed for the separation of GCaMP epifluorescence signals and reflectance signals into the green and blue channels, respectively. Twenty-four-bit RGB images of GCaMP activity and reflectance were captured at 28.9 fps and 256 × 256 resolution. The three cameras (two brain and one behavior) were configured such that one camera was used to start the acquisition of the other two. While mesoscale GCaMP imaging can be achieved with single-wavelength illumination (Gilad and Helmchen, 2020; Nakai et al., 2023; Ramandi et al., 2023), in this experiment each cortex was illuminated using two LEDs simultaneously: one light source (short blue, 447.5 nm Royal Blue Luxeon Rebel LED SP-01-V4 with Thorlabs FB 440-10 nm band-pass filter) provides information about light reflectance during hemodynamic changes (Xiao et al., 2017), and the other (long blue, 470 nm Luxeon Rebel LED SP-01-B6 with Chroma 480/30 nm) excites GCaMP for green epifluorescence. For each mouse, the light from the excitation and hemodynamic reflectance LEDs was directed into a single liquid light guide positioned to illuminate the cortex (Fig. 1b). Further details can be found in the Parts List and assembly instructions document in the Open Science Repository (https://osf.io/96bqv). Each LED was driven by a custom LED driver, triggered by the Raspberry Pi to turn on at the start of the trial and off at the end of the trial. This sudden change in illumination was used during post hoc analysis to synchronize frames across cameras. With the current recording setup, the Raspberry Pi cameras occasionally drop frames (on average, 0.1% of frames) as a result of writing the data to disk. We identified the location of dropped frames by tagging each frame with a timestamp and found that consecutive frames were rarely dropped. Given the small number of dropped frames and the relatively slow kinetics of GCaMP6s (Chen et al., 2013), the lost data were estimated by interpolating the signal for each pixel.
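
The timestamp-based repair of dropped frames can be sketched in Python with NumPy. This is an illustrative reconstruction, not the released Meso-Py code; the function name and the resampling onto a uniform frame clock are our assumptions:

```python
import numpy as np

def fill_dropped_frames(frames, timestamps, fps=28.9):
    """Linearly interpolate each pixel's time-series at dropped-frame times.

    frames: (n, h, w) array of the frames that were actually captured.
    timestamps: (n,) capture time in seconds for each captured frame.
    Returns frames resampled onto a uniform clock at `fps`.
    """
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / fps)
    n, h, w = frames.shape
    flat = frames.reshape(n, -1).astype(float)
    out = np.empty((t_uniform.size, h * w))
    # np.interp is 1-D, so interpolate each pixel's time-series separately
    for px in range(h * w):
        out[:, px] = np.interp(t_uniform, timestamps, flat[:, px])
    return out.reshape(-1, h, w)
```

Because consecutive drops were rare and GCaMP6s kinetics are slow relative to the 28.9 fps frame interval, linear interpolation of isolated missing frames introduces little distortion.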

GCaMP image processing
Image preprocessing was conducted with Python using a Jupyter Notebook (Kluyver et al., 2016). Further analysis was conducted using MATLAB (The MathWorks). Green and blue channels, which contain the GCaMP6s fluorescence and the blood volume reflectance signals, respectively (Ma et al., 2016; Wekselblatt et al., 2016; Valley et al., 2020), were converted to ΔF/F0, with the baseline image F0 estimated as the mean image across time for the entire trial. To correct for potential hemodynamic artifacts (Ma et al., 2016), blue light (440 ± 5 nm) reflectance ΔF/F0 was subtracted from the green fluorescence ΔF/F0. In this way, small changes in brain reflectance due to blood volume changes do not influence the epifluorescence signal. While we acknowledge that green reflectance strobing and model-based correction may be advantageous (Ma et al., 2016), certain technical aspects of the Raspberry Pi camera, such as its rolling shutter and the inability to read its frame exposure clock, prevent this method from being implemented. The short blue wavelength (447 nm) with a 440 ± 5-nm filter is close to an oxy-/deoxygenated hemoglobin isosbestic point, and the reflected 447-nm light signal was found in prior work to correlate well with the reflected 530-nm green light signal (Xiao et al., 2017), which is typically used to estimate signal changes resulting from hemodynamic activity (Ma et al., 2016). Moreover, the 447-nm LED produces very little green epifluorescence signal at the power used to assess reflectance from the dorsal cortical surface, and expected hemodynamic changes are relatively small compared with the signal-to-noise ratio of GCaMP6s (Dana et al., 2014). Examples of corrected and uncorrected signals using this method can be seen in prior work (Xiao et al., 2017; Murphy et al., 2020).
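
The ΔF/F0 conversion and reflectance subtraction reduce to a few array operations; a minimal NumPy sketch, assuming the green and blue channels are co-registered (t, h, w) stacks (function names are ours):

```python
import numpy as np

def dff0(stack):
    """Per-pixel ΔF/F0, with F0 taken as the mean over time (axis 0)."""
    f0 = stack.mean(axis=0)
    return (stack - f0) / f0

def hemodynamic_correct(green, blue):
    """Subtract blue-reflectance ΔF/F0 from green GCaMP ΔF/F0."""
    return dff0(green) - dff0(blue)
```

Subtracting the reflectance ΔF/F0 removes the component of the fluorescence change attributable to blood-volume-driven changes in light absorption.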
Occasionally, noisy extreme pixel values for ΔF/F0 were observed because of imaging near the edge of the window or because of a small F0 in the denominator of ΔF/F0. To reduce their contribution, pixels exceeding a threshold value were set equal to the threshold, thereby reducing artifacts from smoothing or filtering that might result from inclusion of abnormally large ΔF/F0 values. The threshold was set at the mean ± 3.5× the SD of each pixel's time-series for GCaMP data, and at 15% ΔF/F0 for the reflectance data (which is much larger than expected reflectance signal values). The ΔF/F0 signal was then smoothed with a Gaussian image filter (σ = 1) and filtered using a fourth-order Butterworth bandpass filter (0.01-12.0 Hz; Vanni and Murphy, 2014).
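
These clipping, smoothing, and filtering steps map onto standard SciPy calls. A sketch assuming a (t, h, w) ΔF/F0 stack, with the parameter defaults taken from the values above; the second-order-sections filter form is our choice for numerical stability at the low 0.01 Hz cutoff:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, sosfiltfilt

def preprocess(dff, fs=28.9, sd_clip=3.5, sigma=1.0, band=(0.01, 12.0)):
    """Clip outlier pixels, smooth spatially, then bandpass filter in time."""
    mu = dff.mean(axis=0)
    sd = dff.std(axis=0)
    # clip each pixel's time-series at mean ± sd_clip × SD
    clipped = np.clip(dff, mu - sd_clip * sd, mu + sd_clip * sd)
    # Gaussian image filter applied frame by frame (spatial smoothing)
    smoothed = np.stack([gaussian_filter(f, sigma) for f in clipped])
    # fourth-order Butterworth bandpass, zero-phase (forward-backward)
    sos = butter(4, [band[0], band[1]], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, smoothed, axis=0)
```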

Behavior quantification
To extract behavior events, a region of interest (ROI) was manually drawn on the behavior video over each mouse's whiskers and forelimbs (Fig. 2a). The motion energy within each ROI was calculated by taking the absolute value of the temporal gradient of the mean pixel value within the ROI. The resulting motion energy was smoothed via convolution with a Gaussian kernel (σ = 25 frames), and a threshold was established at the mean + 1 SD to detect behaviors (Extended Data Fig. 2-1a). This analysis captured whisker and forelimb movements for the stationary mouse for the entirety of the experiment, and for the moving mouse only during the interaction phase (Fig. 2c; Extended Data Fig. 2-1b), when the moving mouse was in the behavior frame. Behavior data from two trials were excluded because of poor illumination, resulting in the inability to resolve whisker movements.
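
The motion-energy pipeline can be sketched as follows; this is an illustrative NumPy/SciPy version with a hypothetical function name, not the released analysis code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def motion_energy_events(roi_video, sigma=25.0):
    """Detect movement bouts from an ROI of the behavior video.

    roi_video: (t, h, w) pixel values within the ROI.
    Returns (smoothed motion energy, binary event vector).
    """
    trace = roi_video.reshape(roi_video.shape[0], -1).mean(axis=1)
    energy = np.abs(np.gradient(trace))        # absolute temporal gradient
    energy = gaussian_filter1d(energy, sigma)  # Gaussian smoothing (frames)
    thresh = energy.mean() + energy.std()      # mean + 1 SD threshold
    return energy, energy > thresh
```

The binary vector returned here is the behavior event trace used in the later cross-correlation and overlap analyses.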

Interbrain correlation analysis
Correlation across brains was calculated using the Pearson's correlation coefficient (PCC). To compare correlations across trial phases, the interbrain PCC was calculated for a 1-min period during the initial-separate, together, and final-separate trial phases. Global signals were calculated as the median ΔF/F0 across the entire dorsal cortex. Time-varying coherence between global signals was estimated with multitaper methods over a 45-s window with 22.5-s overlap using the MATLAB Chronux toolbox, with a time-bandwidth product of 5 and a taper number of 9 (Bokil et al., 2010; Mitra and Bokil, 2009). Individual regions were selected from coordinates with respect to bregma (Fig. 3g), and their placement was verified after alignment with the Allen Institute Common Coordinate Framework brain atlas (Wang et al., 2020; Saxena et al., 2020), which used key-points along bregma, the superior sagittal sinus, and the space between the olfactory bulb and the frontal cortex. It should be noted that while these key-points can provide a reasonable alignment of the data with the cortical atlas, using key-points obtained with functional mapping may be a more accurate method. These coordinates yielded expected functional networks (Vanni et al., 2017) as identified by calculating seed pixel correlation maps (Extended Data Fig. 3-2). Region-specific time-series data were calculated as the median activity within a 5 × 5 pixel area at each region location.
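
The global-signal and interbrain-correlation computations reduce to a few NumPy operations. A sketch with illustrative function names (the trial-shuffled control pairs signals from mismatched trials; the coherence analysis used the MATLAB Chronux toolbox and is not reproduced here):

```python
import numpy as np

def global_signal(dff, mask):
    """Median ΔF/F0 across masked dorsal-cortex pixels, per frame."""
    return np.median(dff[:, mask], axis=1)

def interbrain_pcc(sig_a, sig_b):
    """Pearson correlation between the two mice's global signals."""
    return np.corrcoef(sig_a, sig_b)[0, 1]

def shuffled_pccs(signals_a, signals_b):
    """PCCs for all mismatched (trial-shuffled) pairings."""
    return [interbrain_pcc(a, b)
            for i, a in enumerate(signals_a)
            for j, b in enumerate(signals_b) if i != j]
```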

Ridge regression analysis
Analysis followed the protocol and incorporated MATLAB code provided previously (Musall et al., 2019). ΔF/F0 data from the stationary mouse from four consecutive trials were brain-masked, concatenated, and then reduced to the first 200 principal components using singular value decomposition (Musall et al., 2019). Timestamps of relevant behavior events from the stationary mouse (whisker movement alone, whisker movement together, forelimb movement) and the moving mouse (whisker movement together, forelimb movement together), and trial-associated events (stage translation on, moving mouse approach, moving mouse leave) corresponding to each trial were also concatenated (Fig. 4a). A design matrix (Musall et al., 2019) was then created for each of these binary event variables, where behavior events included every sample after the event up to 2 s, mouse approach and mouse leaving events included 5 s before or after the approach or leave event, respectively, and the stage translation events included every sample after the event up to 2 s. The design matrix was then fitted against the reduced neural data using ridge regression, and the variance explained was obtained with 10-fold cross-validation. To estimate the explained variance of each model variable individually, a reduced model was created where the specified model variable was randomly permuted (Musall et al., 2019), and the explained variance was computed for this reduced model. The difference in explained variance between the full model and the reduced model shows the unique contribution of the specified model variable.
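
The design-matrix construction, ridge fit, and single-variable permutation test can be sketched as follows. This is a simplified NumPy version of the Musall et al. (2019) approach: no principal-component reduction or 10-fold cross-validation, only post-event lags, and `np.roll` wrap-around at the data edge is ignored; all function names are ours:

```python
import numpy as np

def make_design(events, lags):
    """Time-expanded design matrix: one column per (event variable, lag).

    events: dict name -> binary (n,) vector of event onsets.
    lags: number of post-event samples modeled per variable.
    """
    cols = [np.roll(vec, lag) for vec in events.values() for lag in range(lags)]
    return np.column_stack(cols).astype(float)

def ridge_r2(X, Y, lam=1.0):
    """Closed-form ridge fit; explained variance R^2 on the training data."""
    B = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    resid = Y - X @ B
    return 1 - resid.var() / Y.var()

def unique_contribution(X, Y, cols, rng, lam=1.0):
    """ΔR^2 from permuting one variable's columns in time."""
    Xs = X.copy()
    Xs[:, cols] = Xs[rng.permutation(X.shape[0])][:, cols]
    return ridge_r2(X, Y, lam) - ridge_r2(Xs, Y, lam)
```

A variable that genuinely co-varies with the neural data loses predictive power when permuted, yielding a large ΔR^2; an irrelevant variable yields ΔR^2 near zero.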

Statistics
Statistical tests (Table 2) were conducted with MATLAB. All data were tested for normality using a Kolmogorov-Smirnov test before subsequent statistical analyses. Correlation values were transformed using Fisher's z-transformation. Comparisons between two groups were conducted using two-tailed t tests for parametric data and Wilcoxon signed-rank tests for nonparametric data. Comparisons between trial phases were assessed using a repeated measures ANOVA with post hoc Bonferroni correction for multiple comparisons. Results are presented as mean ± SD. All statistically significant results were observed on the GCaMP signals alone as well as the hemodynamic-corrected signals.
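
Fisher's z-transformation is the inverse hyperbolic tangent, which makes the sampling distribution of correlation coefficients approximately normal so that parametric tests can be applied; a one-line sketch:

```python
import numpy as np

def fisher_z(r):
    """Fisher z-transform of correlation coefficients (variance-stabilizing)."""
    return np.arctanh(r)

def inverse_fisher_z(z):
    """Map z-values back to correlation coefficients."""
    return np.tanh(z)
```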

Resource availability
Resources to assist in building cortex-wide GCaMP imaging systems, including parts lists, assembly instructions, and CAD files, are available at the Open Science Framework project entitled Dual Brain Imaging (https://osf.io/afuzn/). Code for image acquisition, preprocessing, and analysis is available at the University of British Columbia's Dynamic Brain Circuits in Health and Disease research cluster's GitHub (https://github.com/ubcbraincircuits/dual-mouse). Data are available in the Federated Research Data Repository at https://doi.org/10.20383/101.0303.

Meso-Py
We provide a flexible open-source system to capture mouse mesoscale cortical optical signals. The system is built around a robust 3D positioning system employing inexpensive milling-machine X, Y, and Z translators. We provide Python-based software and open-source hardware that makes extensive use of 3D-printed parts and off-the-shelf optics to provide cost-effective imaging systems sufficient to measure GCaMP signals. An advantage of the compact nature of the imaging systems is that they can be placed in close proximity to one another, allowing capture of optical signals from two mice that move in relation to each other. Having one imaging system mounted on a rail and the other stationary allows the creation of staged and constrained interactions that are well suited for examining interbrain correlated activity. Our experiments consisted of three major temporal periods: mice apart, together (nose-to-nose distance of 6-12 mm), and apart again. Most analysis was performed while mice were stationary and apart, or together and not moving. Based on published studies of whisker geometry, we estimate that the whiskers of the two mice would contact each other at nose-to-nose spacings of up to 12 mm (Carvell and Simons, 2017). We also confirmed this using high-speed video where individual whiskers could be measured and overlap confirmed (Fig. 1d).

Mice exhibit correlated bouts of behavior
Forelimb and whisker movements were monitored for each mouse to measure behavior (Fig. 2a) using a camera positioned underneath the interacting mice (Fig. 1c). The stationary mouse's behavior was captured throughout the entire experiment, while the moving mouse's behavior acquisition was limited to the interaction period only (Fig. 2c). Bouts of forelimb and whisker movements often occurred simultaneously (Fig. 2b), and the amount of time spent actively moving whiskers or forelimbs, expressed as a percentage of time spent behaving in each trial phase, did not change between the separate period and the interaction period (n = 33 trials, 14.1 ± 3.4% whisking separate vs 14.4 ± 4.3% whisking interaction, and 8.9 ± 3.8% forelimb separate vs 9.8 ± 4.8% forelimb interaction; p = 0.77, paired t test; Fig. 2b). The percentage of time spent whisking was also not significantly different from a subset of trials where the partner mouse was either absent or kept at the full distance for the entirety of the trial (n = 7 trials, 15.2 ± 3.0% whisking alone or separated vs n = 33 trials, 14.4 ± 4.3% whisking during interaction; p = 0.69, Wilcoxon rank-sum test). The total number of behavior events did not differ across trial phases either (Extended Data Fig. 2-2). To assess the temporal relationship between behaviors across animals, the cross-correlation was computed. The cross-correlation estimates the correlation between two signals as one signal is shifted in time, yielding a measure of similarity as a function of time lag. Behaviors across mice exhibited temporal coordination, as shown by a peak in the cross-correlation of binary behavior vectors at a time lag of 0 s that rose and fell over 20 s (Fig. 2d, black). The cross-correlation of behavior vectors across mice with mesh (Fig. 2d, red) or opaque (Fig. 2d, blue) barriers placed between them was significantly reduced at lag 0 compared with trials without barriers, where whisker touch is possible (one-way ANOVA with post hoc Bonferroni test, F(2,42) = 10.2, p = 2 × 10^−4; open n = 33 trials, mesh n = 16 trials, opaque n = 11 trials; Fig. 2d). Intersection over union for the two behavior vectors, which measures the ratio of shared behavior events (e.g., A and B behave) to all behavior events (e.g., A or B behave), was significantly reduced across animals when mesh or opaque barriers were placed between them (one-way ANOVA with post hoc Tukey-Kramer test, F(2,42) = 10.1, p = 2.6 × 10^−4; open n = 33 trials, mesh n = 16 trials, opaque n = 11 trials; Fig. 2e).
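
The two behavior-overlap measures can be sketched in NumPy. Function names are ours, and the `np.roll` wrap-around at the vector edges is a simplification:

```python
import numpy as np

def xcorr_binary(a, b, max_lag):
    """Normalized cross-correlation of two binary behavior vectors."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.mean(a * np.roll(b, k)) for k in lags])

def intersection_over_union(a, b):
    """Fraction of frames where both mice behave, of frames where either does."""
    a, b = a.astype(bool), b.astype(bool)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 0.0
```

A lag-0 peak in the cross-correlation indicates that the two mice's movement bouts tend to co-occur; intersection over union summarizes the same overlap as a single ratio.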

Interbrain synchronization across multiple cortical regions
Global signals were calculated as the spatial median ΔF/F0 across the entire masked region of the two cortical hemispheres (Fig. 3a,b). The Pearson's correlation coefficient (PCC) between global signals from each mouse was significantly higher during the interaction phase (0.19 ± 0.23) of the experiment than during either of the two separate phases (−0.007 ± 0.13 before; −0.01 ± 0.14 after; repeated measures ANOVA, n = 35 trials, trial phase: F(1,33) = 11.2, p = 0.002; Fig. 3c). Interbrain PCCs during the interaction phase were also significantly higher than PCCs calculated across trial-shuffled global signal pairings during the interaction phase (0.19 ± 0.23 interaction phase vs 0.02 ± 0.14 trial-shuffled, n = 35 correct pairs vs n = 595 shuffled pairs, p = 1.2 × 10^−11, two-sample t test; Fig. 3c). Cagemate versus noncagemate pairings did not have a significant effect on interbrain correlation (two-sample t test, p = 0.66; Fig. 3d). A subset of the experiments included back-to-back repeat pairings of the same animals. PCCs were not found to be significantly different between first and second pairings (0.22 ± 0.28, n = 6 first pairings; 0.19 ± 0.15, n = 6 second pairings; p = 0.82, paired-sample t test). Additionally, the increase in interbrain correlation was not observed in experiments using the Thy1-GFP mouse line (Feng et al., 2000), which expresses a neuronal activity-independent fluorophore in neurons but is still susceptible to hemodynamic artifacts (n = 4 trials, p = 0.44, one-way ANOVA; Extended Data Fig. 3-1). An expanded view of the interaction period from Figure 3b is shown with behavior annotations in Figure 3e. Prominent calcium events are often accompanied by sustained periods of behavior (Fig. 3e). The duration of the behavior event, measured as the duration of each threshold-crossing movement detected in either forelimb or whisker ROIs (Extended Data Fig. 2-1), was positively correlated with the average ΔF/F0 during the behavior (n = 2075 behavior events, Spearman correlation r = 0.40, p = 6.0 × 10^−81; Fig. 3f). Interbrain correlations on a region-by-region basis, including putative motor areas [whisker motor cortex (wM1), secondary motor cortex (M2), anterior lateral motor cortex (ALM)], sensory areas [primary visual cortex (V1), forelimb (FL), hindlimb (HL), anterior barrel cortex (aBC), posterior barrel cortex (pBC), retrosplenial (RS)], and parietal association area (PTA; Fig. 3g), were greater during the interaction period compared with the separated periods, with no significant differences found between cortical regions (0.17 ± 0.03 together vs −0.013 ± 0.009 before and −0.019 ± 0.02 after interaction; two-way ANOVA with post hoc Bonferroni correction, n = 35 trials, trial phase: F(1,680) = 136, p = 1.0 × 10^−28; brain regions: F(9,680) = 0.81, p = 0.61; interaction: F(9,680) = 1.2, p = 0.30; Fig. 3h,i). Example seed pixel correlation maps are shown in Extended Data Figure 3-2. Intrabrain correlations also showed a slight but significant increase during the interaction period, with no significant effects observed across individual brain regions (two-way ANOVA with post hoc Bonferroni correction, n = 35 trials, trial phase: F(1,680) = 44.1, p = 6.3 × 10^−11; brain regions: F(9,680) = 0.30, p = 0.98; interaction: F(9,680) = 0.01, p = 1.0; Extended Data Fig. 3-3). Coherence between brains was calculated to determine the relationship between global cortical signals as a function of frequency. Time-varying coherence between brains revealed an increase in global signal coherence during the interaction phase at frequencies below 0.2 Hz (Fig. 3j). Experiments with physical barriers in place (opaque cardboard sheet or a transparent copper mesh) to prevent whisker-whisker contact between mice did not result in significant increases in region-by-region interbrain correlation during the interaction phase (opaque trials: 0.023 ± 1.3 together vs −0.024 ± 0.09 before and 0.03 ± 0.13 after, repeated measures ANOVA, n = 15, F(2,28) = 1.0, p = 0.34; mesh trials: 0.09 ± 0.22 together vs −0.011 ± 0.11 before and −0.012 ± 0.11 after, repeated measures ANOVA, n = 16, F(2,30) = 0.022, p = 0.88; Fig. 3k,l). Although barriers reduced average intermouse correlation, it is possible that some animals can sense each other at a more considerable distance. However, no ultrasonic vocalizations were detected during these experiments (Extended Data Fig. 3-4). Videos of all intermouse interactions are available so that readers can also assess interactions as the mice approach each other (see Open Science Framework link https://osf.io/teshq/).

Cortical representations of behavior events
The extent to which behavior and trial events (mice together, apart, etc.) co-vary with the stationary mouse's cortical activity was assessed using ridge regression (Musall et al., 2019). A set of predictor variables, which included behaviors from the stationary mouse throughout the trial, behaviors from the moving mouse during the together period, and events associated with the trial structure (Fig. 4a), was used to model the cortical activity of the stationary mouse. The ridge regression analysis was performed on the pooled data from a specific mouse for all trials in the open interaction condition with no barrier (Fig. 4), as well as in the mesh barrier condition, which was expected to reduce whisker contact (Extended Data Fig. 4-1). Across nine open-interaction trials, the ridge regression (behavior to brain activity) model predicted on average 30% of the GCaMP signal variance across the dorsal cortex, reaching up to 54% for some areas within somatosensory cortex (Fig. 4b). In the same mouse across four mesh barrier trials (Extended Data Fig. 4-3a), model performance was similar, explaining on average 33% of the variance in the cortical GCaMP signal across the dorsal cortex and up to 53% in somatosensory cortex (Extended Data Fig. 4-1b). To determine each variable's nonredundant contribution to the total explained variance, the explained variance was computed after randomly permuting each of the single model variables. The difference in explained variance (ΔR^2) shows the unique contribution of the permuted variable toward explaining the variance in the data (Musall et al., 2019). Unique contributions for the stationary mouse's forelimb movements showed strong linkages with somatosensory areas (Fig. 4c). In the open interaction trials, whisking movements initiated by the stationary mouse differed depending on the context (alone vs together), with an increase in explained variance observed in the posterolateral areas of cortex near whisker sensory areas when the mice were together (Fig. 4d,e; Extended Data Figs. 4-1, 4-2). Behaviors of the moving mouse also had some contribution toward the cortical activity of the stationary mouse in the open interaction trials, where moving-mouse forelimb or whisker movements co-vary with posterolateral cortical areas near the whisker somatosensory network in the stationary mouse (Fig. 4f,g; Extended Data Figs. 4-1, 4-2). In contrast, in the mesh barrier trials, whisking together compared with separately did not exhibit a localized increase in explained variance in the posterolateral cortical areas (Extended Data Fig. 4-1e). Additionally, partner behaviors explained little variance in the stationary mouse's cortical activity when the barrier was present (Extended Data Fig. 4-1g). In both open and mesh barrier trials, trial-associated events (for example, stage translation and moving-mouse approach/leave) had minimal contributions to the explained variance of the stationary mouse's cortical GCaMP activity, and present as broadly distributed noise rather than discrete cortical maps containing known circuit motifs.

Discussion
We introduce a flexible, low-cost Raspberry Pi-based mesoscale imaging platform (~2750 USD per rig). The dual-mouse imaging system creates reproducible interactions between mice in which the head-restrained, rail-based design constrains some of the possible behaviors and their timing. This type of head-restrained social interaction may be useful for studying the neural correlates of whisker investigation of social partners in barrel cortex. Our results indicate widespread correlated cortical activity between the brains of interacting mice. This synchrony is not associated with the mechanics or timing of the imaging paradigm, as it was not present when trial-shuffled mouse pairs were examined, and is not attributable to hemodynamic artifacts in the fluorescence signal, as synchronization was not observed across pairs of GFP mice. The low-frequency coherence observed between brains and the greater temporal overlap of behaviors when the mice are together suggest that the interanimal cortical synchronization observed in this work may be driven by temporally correlated bouts of behavior (e.g., whisking or forelimb movements). Although previous work found that the magnitude of interbrain synchronization may convey information regarding the social status of one of the individuals (Jiang et al., 2015; Kingsbury et al., 2019), we did not find a relationship between social rank differences (determined by the tube test assay) and cortex-wide interbrain synchronization in our experiments. This discrepancy might be attributed to the head-restrained nature of the interaction; however, a more rigorous examination (with greater statistical power) of the social hierarchy may be needed to determine whether there is a relationship between social rank and interbrain correlation.
Using ridge regression (Musall et al., 2019), we assessed the degree to which different behavioral variables correlate with cortical GCaMP activity in a region-specific manner. The robust explained variance maps for forelimb and whisker movements in the stationary mouse are consistent with recent findings that movements drive widespread neural activity across the cortex. Although lower in magnitude, moving-mouse behaviors also contributed to cortical GCaMP activity variance in the stationary animal, and their regional distribution qualitatively resembled the explained variance maps observed when the stationary mouse whisked in the presence of a partner, but not alone, suggesting the engagement of different cortical regions in this context. This may reflect cortical activity arising from physical contact made between the whiskers of the mice, as whisking against the mesh barrier similarly explained variance in GCaMP activity in the posterolateral regions of cortex.
One limitation of the presented work is that the frame rate of the behavior camera was not fast enough to clearly resolve whisker movements. Detailed analyses of whisker movements in mice typically use camera acquisition rates of ~500 fps (Sofroniew et al., 2014; Mayrhofer et al., 2019). It is possible that some whisking events were missed by our analyses, or that the precise timing of whisk initiation was not accurately resolved. However, false-negative error rates should presumably be consistent across experiments. Adding a second behavior camera to the moving stage is also possible and would allow characterization of the moving mouse's behavior during periods when the mice are separated. Another important consideration for interpreting these results is that mice must be head-restrained to be imaged and positioned properly. In a previous study, head-fixation was found to be aversive, but with training and habituation the associated stress recedes (Z. V. Guo et al., 2014), and rodents can even be trained to restrain themselves (Scott et al., 2013; Aoki et al., 2017; Murphy et al., 2020). Furthermore, recent work directly comparing head-fixed and non-head-fixed task versions indicates that head-fixed animals can still exhibit high levels of performance in water reaching tasks (Galiñanes et al., 2018) and other licking-based choice tasks (Murphy et al., 2020; International Brain Laboratory et al., 2021).
We therefore present these results as an interaction that occurs in the context of head-fixation and caution that the observed brain dynamics may not reflect truly naturalistic social touch behavior. Despite this, head-restraint facilitates consistent and reproducible interactions between animals, allowing for trial-averaging of behaviors. The recent development of a head-mounted mesoscopic camera raises the exciting possibility of examining cortex-wide neural dynamics during more naturalistic social interactions in freely-moving mice (Rynes et al., 2021). Our intention in designing this system was to reduce visual stimuli by lighting the scene under infrared light. Under such conditions it is unclear how well the mice could visually perceive each other. Future experiments could investigate visual signals related to the expectation of an upcoming social interaction by incorporating ambient lighting sufficient for clear visual detection of an approaching partner. Additional experiments could also incorporate simultaneous electrophysiological (Xiao et al., 2017) or fiber photometry (Ramandi et al., 2023) recordings in subcortical targets such as the hypothalamus, ventral striatum, or hippocampus, to assess correlations between dorsal cortex and socially relevant activity in these deep structures (Dölen et al., 2013; Okuyama et al., 2016; Rogers-Carter et al., 2018; Walsh et al., 2018).

Figure 1. Setup for dual mouse brain imaging system. a, Cartoon depiction of surgical preparation for transcranial mesoscale imaging, with custom-cut coverslip and titanium bar for head fixation; x denotes the location of bregma. b, Close-up view of mouse positioning during the interaction phase of the experiment. c, Larger field-of-view render of the imaging system. Numbered components are as follows: (1) Raspberry Pi brain imaging camera; (2) GCaMP excitation and hemodynamic reflectance LED light guide; (3) ultrasonic microphone; (4) stationary mouse; (5) moving mouse; (6) Raspberry Pi infrared behavior camera; (7) stage translation knob; (8) stepper motor with belt controlling stage translation. Blue arrows indicate the direction of motion of the translatable rail. d, Example image of whisker overlap during a whisking event captured using a high-speed camera. The image is spatially high-pass filtered to accentuate whiskers.
Filter, CWL = 620 ± 2 nm, FWHM = 10 ± …; Filter, CWL = 440 ± 2 nm, FWHM = 10 ± …. The mean image was subtracted from each individual frame (ΔF). The result of this difference was then divided by the mean image, yielding the fractional change in intensity for each pixel as a function of time (ΔF/F0).
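The ΔF/F0 computation described here (pixelwise mean image subtracted from each frame, then divided by the mean image) reduces to a few lines of NumPy. The function name `dff0` is ours; this is a sketch of the stated operation, not the project's actual code.

```python
import numpy as np

def dff0(frames):
    """Convert a raw image stack (time, height, width) to fractional change
    in fluorescence: (F - F0) / F0, with F0 the pixelwise temporal mean."""
    frames = np.asarray(frames, dtype=float)
    f0 = frames.mean(axis=0)      # baseline image, F0
    return (frames - f0) / f0     # DF/F0 for each pixel at each time point
```

For example, a pixel at 100 counts in a stack whose temporal mean at that pixel is 200 counts maps to a ΔF/F0 of -0.5.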

Movie 1. Cortical signal responses for interacting mice. Data from two head-restrained tTA-GCaMP6s mice at the onset of the interaction phase. The two pseudo-colored images show the processed calcium images from the left and right mouse. The color bar indicates the ΔF/F0 of the signals. Behavior video is shown below. [View online]

Figure 2. Mice coordinate behavior during interaction. a, Example infrared (IR) bottom camera image of both mice during the interaction phase of the experiment. Regions over each mouse's whiskers and forelimbs are shown in magenta and cyan boxes, respectively, to estimate motion. b, Average percentage of time spent behaving during the first separated phase of the experiment (top) and the interaction phase (bottom). Intersecting regions show concurrent whisker and forelimb movements (~5% of total time). c, Timeline of experimental paradigm (top) and ethograms for the stationary and moving mouse (bottom). d, Cross-correlation of each mouse's binary behavior vectors during the interaction phase where whiskers could freely touch (open, black) versus trials where a mesh (red) or opaque (blue) barrier was placed between the animals, preventing physical touch. Behaviors across mice during the interaction phase were significantly correlated near 0 lag when there was no barrier present (open condition). e, Intersection over union (Jaccard similarity index) for the behavior vectors was significantly greater during the interaction phase for trials with no barrier present compared with mesh or opaque barrier trials. n = 33 mouse pairs; one-way ANOVA with post hoc Bonferroni test for multiple comparisons; F(2,42) = 10.1; p = 2.6 × 10^-4. See Extended Data Figures 2-1 and 2-2 for additional data. Asterisk indicates significance.
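The behavior-overlap metrics in panels d and e can be sketched as follows. Note `np.roll` gives a circular (wrap-around) cross-correlation, a simplification of whatever edge handling the actual analysis used; the function names are ours.

```python
import numpy as np

def jaccard(u, v):
    """Intersection over union of two binary behavior vectors (as in Fig. 2e)."""
    u, v = np.asarray(u, bool), np.asarray(v, bool)
    union = np.logical_or(u, v).sum()
    return np.logical_and(u, v).sum() / union if union else 0.0

def xcorr_binary(u, v, max_lag=10):
    """Normalized cross-correlation of mean-subtracted binary vectors over a
    range of lags (as in Fig. 2d); circular shift used for simplicity."""
    u = np.asarray(u, float) - np.mean(u)
    v = np.asarray(v, float) - np.mean(v)
    denom = np.sqrt((u ** 2).sum() * (v ** 2).sum())
    return np.array([np.dot(np.roll(u, lag), v) / denom
                     for lag in range(-max_lag, max_lag + 1)])
```

For two vectors `[1,1,0,0]` and `[1,0,1,0]`, the intersection is 1 frame and the union 3 frames, giving a Jaccard index of 1/3.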

Figure 3. Interbrain synchronization during interaction. a, Example images of dorsal cortical windows for the stationary mouse (top) and moving mouse (bottom). b, Representative example of median GCaMP6s activity across the cortical mask for each mouse. c, Pearson correlation coefficients of the two global signals were significantly greater during the interaction phase than during either of the two separated phases (n = 35 mouse pairs, p < 0.001; repeated measures ANOVA with post hoc Bonferroni correction for multiple comparisons). Interbrain correlations during interaction were significantly greater than in trial-shuffled interaction-phase pairings (n = 35 mouse pairs vs n = 595 shuffled mouse pairs, p < 0.001; t test). d, Interbrain signal correlation does not depend on cagemate (CM) versus noncagemate (NCM) experiments (t test, p = 0.66, cagemates n = 20 mouse pairs, noncagemates n = 13 mouse pairs). e, Expanded view of global signals during the interaction phase with behavior annotations overlaid. f, Global cortical signal ΔF/F0 is positively correlated with the duration of whisking or forelimb movement (Spearman correlation coefficient; r = 0.40; p < 0.001). g, Example image of transcranial mask with putative cortical regions labeled. Abbreviations: ALM, anterior lateral motor cortex; M2, secondary motor cortex; wM1, whisker motor cortex; aBC, anterior barrel cortex; pBC, posterior barrel cortex; HL, hindlimb; FL, forelimb; lPTA, lateral parietal association area; RS, retrosplenial cortex; V1, primary visual cortex. h, Averaged interbrain correlation matrices across all experiments during the period before interaction (left) and the period during interaction (right). i, Change in interbrain correlation for each region of interest against all other regions, averaged across mice (n = 35 mouse pairs, *p < 0.05; two-way ANOVA with post hoc Tukey-Kramer test). Bars show mean ± SE. j, Time-varying interbrain coherence, computed with a 45-s window on the global cortical signals and averaged across all experiments, shows an increase in coherence from 0 to 0.2 Hz during the interaction phase (white dashed lines). k, l, Pearson correlation coefficients of the two global signals showed no significant difference between trial phases when an opaque partition (k; p = 0.34, repeated measures ANOVA) or copper mesh (l; p = 0.88, repeated measures ANOVA) was placed between the mice, preventing whisker contact. Asterisks indicate significance. NS indicates not significant. See Extended Data Figures 3-1, 3-2, 3-3, and 3-4 for additional data.
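The windowed interbrain coherence of panel j can be approximated with SciPy. The 30 Hz frame rate, the surrogate traces sharing a ~0.1 Hz component, and the Welch segment length are all assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import coherence

fs = 30.0                 # assumed imaging frame rate (Hz)
win = int(45 * fs)        # 45-s sliding window, as in Figure 3j
rng = np.random.default_rng(2)

# Two surrogate global signals sharing a slow (~0.1 Hz) component.
T = int(600 * fs)
t = np.arange(T) / fs
slow = np.sin(2 * np.pi * 0.1 * t)
x = slow + rng.standard_normal(T)
y = slow + rng.standard_normal(T)

# Magnitude-squared coherence in each 45-s window -> time-frequency map.
rows = []
for start in range(0, T - win + 1, win):
    f, cxy = coherence(x[start:start + win], y[start:start + win],
                       fs=fs, nperseg=win // 4)
    rows.append(cxy)
coh = np.array(rows)      # shape: (n_windows, n_freqs)
```

Averaging `coh` over experiments and plotting it as a time-frequency image would yield a map in the style of panel j, with elevated coherence confined to the low-frequency band where the shared component lives.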

Figure 4. Ridge regression model in open interaction trials confirms somatotopic representation of whisker and forelimb behaviors across animals. a, Binary event vectors (colored lines) considered in the ridge regression model. The model included nine separate interaction trials that had been concatenated together (dotted lines; partner limb and whisker behavioral variables were only assessed during the together phase). b, Explained variance for the full model after 10-fold cross-validation, projected back onto the cortical map. Scale bar: 2 mm. c-e, Unique contribution of each stationary mouse behavioral model variable, taken as the difference in explained variance between the full model and the reduced model with the specified variable randomly permuted. f, g, Same as c-e, except for the partner mouse behaviors. h-j, Same as c-e, except for trial-associated events. See Extended Data Figures 4-1, 4-2, 4-3, and 4-4 for additional data.

Table 1: Parts list for social interaction system