Pi USB Cam: A Simple and Affordable DIY Solution That Enables High-Quality, High-Throughput Video Capture for Behavioral Neuroscience Research

Abstract
Video recording is essential for behavioral neuroscience research, but the majority of available systems suffer from a poor cost-to-functionality ratio. Commercial options frequently come at a high financial cost that prohibits scalability and throughput, whereas DIY solutions often require significant expertise and time investment unaffordable to many researchers. To address this, we combined a low-cost Raspberry Pi microcomputer, DIY electronics peripherals, freely available open-source firmware, and custom 3D-printed casings to create Pi USB Cam, a simple yet powerful and highly versatile video recording solution. Pi USB Cam is constructed from affordable and widely available components and requires no expertise to build and implement. The result is a system that functions as a plug-and-play USB camera that can be easily installed in various animal testing and housing sites and is readily compatible with popular behavioral and neural recording software. Here, we provide a comprehensive parts list and step-by-step instructions for users to build and implement their own Pi USB Cam system. In a series of benchmark comparisons with a standard, commercially available USB camera, Pi USB Cam captured ultra-wide fields of view of behaving rats given a limited object distance and produced high image quality while maintaining consistent frame rates, even under low-light and no-light conditions. Video recordings were easily scaled using free, open-source software. Altogether, Pi USB Cam presents an elegant yet simple solution for behavioral neuroscientists seeking an affordable and highly flexible system to enable quality video recordings.


Introduction
Video recorded data are a crucial component of behavioral neuroscience research. While a number of commercially available recording solutions exist, few, if any, satisfy all or most of the needs of a typical laboratory. Many behavioral assays, particularly those intended for use with nocturnal species, are conducted under low-light or no-light conditions to mimic the portion of the circadian cycle during which animals are the most active. Therefore, an ideal video recording system enables video acquisition under both ambient and infrared (IR) illumination. Moreover, spatial limitations imposed by testing apparatuses frequently require capture of a large field of view from a relatively short object distance. Although some commercially available options designed specifically for behavioral neuroscience research meet these criteria, they are often expensive and unique to a specific testing apparatus, limiting scalability and flexibility of use. Given these limitations, researchers frequently repurpose commercial USB webcams for video acquisition because of their affordability and accessibility. However, webcams are rarely intended for recording under low-light to no-light conditions, nor for capturing a large field of view when subjects are at a relatively short distance from the camera.
DIY solutions that make use of low-cost, single-board microcomputers such as the Raspberry Pi have gained the attention of behavioral neuroscientists as laboratories attempt to address the shortcomings of existing commercially available solutions. For example, Singh et al. (2019) developed a powerful web-based interface for long-duration, remote video recording and streaming. However, lack of active maintenance and development quickly rendered it obsolete, as the current generation of Raspberry Pi board (4B) no longer supports the legacy OS (Raspbian Stretch) for which it was developed. Recently developed solutions that use the Raspberry Pi Camera in its most basic form to acquire video are less demanding in terms of debugging and updating (Saxena et al., 2018; Weber and Fisher, 2019; Clemensson et al., 2020; Centanni and Smith, 2021). However, these out-of-the-box options suffer from limited functionality, often lacking the capacity to live-preview and record at the same time or to interface with closed-loop behavioral control systems. As a result, DIY solutions frequently require a degree of programming knowledge to properly configure and adapt the system to specific research needs, thereby limiting broad application of such systems by novice users. Thus, drawbacks associated with both commercially available and DIY video capture solutions pose significant limitations to easy acquisition of video recorded behavioral data.
To address this gap, we designed a versatile, low-cost, DIY video recording solution that requires no specialized expertise or equipment. Pi USB Cam is low-/no-light compatible and accepts a variety of wide-angle lenses.
The system utilizes open-source, actively maintained software that enables true plug-and-play capability. Combined with custom 3D-printed components and freely available open-source video acquisition software, this system offers highly versatile implementation and scalability across numerous behavioral testing conditions. Here, we provide detailed step-by-step instructions for hardware and software setup and demonstrate its ease of use and superiority in terms of field of view and low-light recording over commercial counterparts.

Materials and Methods
Build the camera
Pi USB Cam comprises a Raspberry Pi board (Raspberry Pi, Cambridge) and a wide-angle day-night vision camera (Arducam Technology Co, Limited). This camera comes equipped with a motorized IR-CUT filter that is automatically triggered based on ambient light intensity to accommodate both bright-light and low-/no-light recording, as well as an out-of-the-box 170° diagonal field of view (DFOV) × 140° horizontal FOV (HFOV) fisheye lens. The camera uses a 5-MP OV5647 sensor, which can support up to 30 frames per second (FPS) at 1080p when used with the suggested firmware. Additionally, the camera comes with two 850-nm IR LEDs attached to facilitate image acquisition in low-/no-light conditions.
We selected the Raspberry Pi 4 Model B to run the camera, which was the latest release of the main product line at the time of publication. This model and the more affordable and compact Raspberry Pi Zero are readily supported by Show-me webcam, an open-source firmware that enables a Raspberry Pi-connected camera to boot as a simple USB camera. While both models are compatible as the base hardware for building a Pi USB Cam, we chose the Pi 4B over the Zero because of its added processing power, which is significantly more advantageous if the Pi board were to be repurposed for GUI applications. Of note, while the Raspberry Pi Zero 2 was recently released, it is not yet supported by Show-me webcam, although this is likely to change in the near future. Legacy boards such as the Pi 3+ and earlier releases, which are no longer widely accessible for purchase, are not supported by Show-me webcam.
A complete list of store-bought components is provided in Table 1 with a more comprehensive list including alternative options provided in Extended Data Table 1-1.
Step-by-step instructions
Software installation.
1. Insert a clean micro-SD card with a minimum storage of 64 MB into your computer (Fig. 1A).
2. Download the latest release of the Show-me webcam image file to your computer from the developer's GitHub page https://github.com/showmewebcam/showmewebcam (Fig. 1B). Be sure to download the image file corresponding to the appropriate Raspberry Pi board model.
3. Download, install, and launch the official Raspberry Pi Imager from https://www.raspberrypi.com/software/ (Fig. 1C). Inside the imager's main interface, click on CHOOSE OS followed by Use custom to locate the image file you just downloaded (Fig. 1C1,C2). Then, click on CHOOSE STORAGE and select the clean micro-SD card (Fig. 1C3,C4). Lastly, click on WRITE followed by YES on the pop-up warning to flash the Show-me webcam image file into the micro-SD card (Fig. 1C5,C6).
Hardware assembly.
1. Gather the essential hardware components shown in Figure 2A and, if desired, appropriate housings for the Pi board (Table 1) and camera (Table 2). Optional: if you intend to use the camera for low-/no-light recording close to a reflective surface such as Plexiglas, we recommend removing the IR LEDs that come attached to the camera module (Fig. 2B) at this stage. Doing so allows for flexible LED placement, enabling the user to avoid flaring artifacts in the captured image (Fig. 4A). For more information on how to use and position LEDs independently from the camera, refer to "IR LED Setup" and see Figure 4.
2. To connect the camera module to the Pi board, first locate the camera ribbon cable ports on the camera module and the Pi board, as indicated in Figure 2C1. Gently lift the black plastic clip on the ports. Insert the ribbon cable terminal, making sure that the silver leads on the cable face the contacts inside the port. Push down the plastic clip to secure the ribbon cable in place (Fig. 2C2).
3. Insert the micro-SD card prepared in Software Installation into the micro-SD card slot on the back of the Pi board to finish the setup (Fig. 2D).
4. To power up and start using the Pi USB Cam, connect it to a host computer via its USB-C port using a USB cable (Fig. 2E1,E2). The green built-in LED will blink rapidly three times after booting, which takes around 5 s, to indicate that the Pi USB Cam is ready for use. This LED will remain illuminated when the Pi USB Cam is in active use (Fig. 2E2).
5. You can optionally house the Pi USB Cam in our custom 3D-printed camera case for protection and use mount tools for easy installation in a variety of behavioral testing configurations (Figs. 2F, 9-11). STL files and print instructions can be found at our Thingiverse page: https://www.thingiverse.com/gloverlab/designs.
Note: the STL files provided can be readily sliced by popular slicing software and printed with consumer-grade 3D printers or via commercial pay-to-print services offered by universities and elsewhere. For an estimated cost of printing a full set of our camera and IR LED holders using either printing option, see Extended Data Table 1-2.

Adjusting camera settings
The Arducam day-night vision camera module that we suggest using here has an on-board photoresistor (Fig. 3A1) that senses ambient light intensity. This allows for automatic control of the IR filter to enable IR sensitivity under low-/no-light conditions (Fig. 3B) and improve color accuracy under bright-light conditions (Fig. 3C). However, in scenarios where lighting conditions change dramatically within a single recording or approach the ambient light threshold, one might consider manually disabling the motorized IR filter to prevent automatic IR filter shuttering, keeping it in either a permanent ON or OFF state throughout the entire recording session. Covering the photoresistor with nontranslucent tape will prevent ambient light from reaching the sensor, thereby enabling IR sensitivity by permanently turning the IR filter OFF (Fig. 3A1). Conversely, to manually enable IR correction, the IR filter can be permanently switched ON by unplugging the motorized IR filter connector from the back of the camera board (Fig. 3A2).
Camera software settings such as brightness and white balance can also be readily adjusted to suit specific recording needs, such as improving the image color when recording under low-light conditions (Fig. 3B). Users can access camera setting parameters by interfacing the Pi USB Cam with a host computer. The following protocol (adapted from https://tutorial.cytron.io/2020/12/29/raspberrypi-zero-usb-webcam/) describes how to access and adjust these parameters.

Step-by-step instructions
1. Connect the Pi USB Cam to the host computer and identify the serial (COM) port it has been assigned.
2. Download, install, and launch PuTTY (Fig. 3F1). Then set Connection type to Serial (Fig. 3F2), enter the COM port number of the camera under Serial line (Fig. 3F3), and set Speed to 115200 (Fig. 3F4). Click Open to log into the Pi USB Cam (Fig. 3F5).
3. Once logged in, type the following command: /usr/bin/camera-ctl and press Enter to launch the camera-ctl interface and show all adjustable camera setting parameters (Fig. 3G). These can be adjusted during live preview to see how any changes affect image quality. Optional: we recommend setting Video Bitrate to Maximum (25000000), and Auto Exposure, White Balance, and ISO Sensitivity Auto to Manual (Fig. 3G2), to improve the image coloring under low-light conditions (Fig. 3B). We also set Brightness to 53 for all low-light recordings used in this paper. For recording under bright-light conditions, consider reverting White Balance to the default Auto to improve color accuracy (Fig. 3C). However, we encourage experimenting with this setting, as it likely varies with the recording conditions and devices.
4. Once finished, press S to save the changes to the camera setting parameters, or revert to defaults by pressing D (resets an individual setting) or R (resets all settings). To terminate the serial connection, first press Q to quit the camera-ctl interface, and then close the PuTTY session window.
5. To adjust the focus of the camera lens, first loosen the screw that secures the lens inside the M12 lens holder, as indicated in Figure 3H, then simply turn the lens clockwise or counterclockwise while monitoring the camera preview until the image becomes sharp.
IR LED setup
IR-sensitive cameras need to be coupled with adequate IR illumination to produce high-quality images in low-/no-light conditions. The wide-angle day-night vision camera from Arducam that we suggest using in this configuration comes with two 850-nm IR LEDs attached and powered by the camera board (Fig. 2B). However, recording through reflective surfaces such as Plexiglas across short object distances (e.g., in an operant box) caused undesirable flare artifacts in the field of view (Fig. 4A). To circumvent this, we removed the LED boards from the main camera board and powered them independently using a pair of 3v3 power and ground pins on the Raspberry Pi board (Extended Data Fig. 2-1). This allowed LED placement independent of the camera, enabling even, high-quality illumination of the area of interest (Fig. 4B,C). This section describes the steps to wire two 850-nm LEDs in parallel using custom-made jumper cables and power them with the Pi board itself (Fig. 4D-G).
Note: the 3v3 power pin of the Pi board can safely provide up to 500 mA of current (according to: https://pinout.xyz/pinout/3v3_power). Exceeding that limit using high-power LEDs may cause a "brownout" of the Pi and potential safety concerns. Our testing indicated that the two IR LED boards that came with the Arducam day-night vision camera draw 200 mA of current from the 3v3 pin when connected in parallel. For higher-power LEDs, it is recommended to use either the 5v power pin, which provides a higher current draw of ~1.5 A (according to: https://pinout.xyz/pinout/5v_power), or an external battery pack or power supply unit. For alternative LED options and related hardware, see Extended Data Table 1-1.
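As a sanity check before wiring, this power-budget arithmetic can be sketched in a few lines of Python. The pin limits are the approximate figures quoted above; the headroom factor is our own illustrative assumption, not a value from the cited sources:

```python
# Sketch: checking a planned LED configuration against the Pi pin budgets
# quoted in the text. PIN_LIMIT_MA values come from the sources cited above;
# the headroom factor is an illustrative assumption to leave safety margin.
PIN_LIMIT_MA = {"3v3": 500, "5v": 1500}

def within_budget(pin, draws_ma, headroom=0.8):
    """True if the summed LED current fits within the pin's derated budget."""
    return sum(draws_ma) <= PIN_LIMIT_MA[pin] * headroom

# The two Arducam IR LED boards in parallel draw ~200 mA total:
print(within_budget("3v3", [200]))       # fits comfortably on the 3v3 pin
# Four such pairs would exceed the derated 3v3 budget, but fit on 5v:
print(within_budget("3v3", [200] * 4))
print(within_budget("5v", [200] * 4))
```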
Step-by-step instructions
1. Following the step where the LED boards were removed from the main camera board (Fig. 2B), cut three segments of 22-gauge red-black electrical wire (Fig. 4D): (1) a short ~3-cm wire to connect the two LED boards in parallel; (2) a long ~80-cm wire (adjust to specific installation requirements) for connecting the LED unit to the Pi board located some distance from the camera; (3) a short ~6-cm wire to connect wire (2) to a pair of 3v3 power and ground pins on the Pi board.
2. Split wire (1) down the middle into two separate red and black wires. Split one end of wires (2) and (3) by ~3 cm and the other end of each by ~1 cm.
3. Crimp the appropriate electrical connectors onto the stripped wires as shown in Figure 4D.
a. Install two noninsulated ring connectors on either end of wire (1).
b. Install two ring connectors on the end of wire (2) separated by ~3 cm and two male pin connectors housed in one 1x2 plastic housing on the other end.
c. Install two female pin connectors, each housed in a 1x1 plastic housing, on the end of wire (3) separated by ~3 cm, and two female pin connectors housed in one 1x2 plastic housing on the other end.
Note: for detailed instructions on how to crimp ring and pin type connectors, see Extended Data Figures 4-1 and 4-2.
4. Wire the LED boards in parallel as shown in Figure 4E,F1.
Note: red wires are used for power (+) and black wires for ground (−). Make sure the polarities of all components align before completing the circuit.
a. Secure the ring connectors on wires (1) and (2) to the LED boards using the screws that came with the camera module.
b. Mate the two female pin connectors on wire (3) to any pair of 3v3 power and ground pins on the Pi board's 40-pin header (e.g., pins 1 and 6, as shown in Fig. 4E).
c. Connect wires (2) and (3) via their respective pin connectors to complete the circuit.
5. Test functionality by powering up the Pi USB Cam and confirming that the LED unit is illuminated under low-/no-light conditions. Install heat sinks on the back of the LED boards to prevent overheating (Fig. 4F). Optionally, house the fully wired LED unit in the custom 3D-printed case and install the fully assembled unit in any behavioral testing site using the 3D-printed mount tools (Fig. 4G).

Multicamera video acquisition
Pi USB Cam affords a high degree of scalability that is often desired for high-throughput experiments. Use of the freely available, open-source video acquisition software OBS (Open Broadcaster Software) Studio enables preview of live video feeds on screen for real-time behavioral monitoring, live-streaming of video feeds over the Internet, and simultaneous video acquisition from multiple camera sources into separate local files for offline behavioral analysis. OBS Studio is compatible with all three major operating systems; however, we recommend using a Windows PC as the host computer because of added functionalities, including video configuration settings, that are not available on other platforms. Scalability is limited only by the hardware of the host computer (e.g., number of USB controllers/ports, sufficient CPU, etc.). We recommend using a Windows desktop that allows for add-on USB PCIe expansion cards to provide additional USB controllers and greater bandwidth for multicamera recordings.
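The per-controller camera limit can be roughed out with a back-of-the-envelope calculation. The sketch below assumes each camera reserves bus bandwidth as if streaming uncompressed 16-bit-per-pixel video; this is an illustrative assumption, not a measured value, since actual reservations depend on the negotiated stream format:

```python
# Rough estimate of how many cameras one USB 2.0 controller can carry.
# Assumes each camera reserves bandwidth for an uncompressed 16 bit/px
# stream (illustrative assumption; compressed formats reserve less).
def camera_bits_per_s(width, height, fps, bits_per_px=16):
    """Approximate per-camera bus reservation in bits per second."""
    return width * height * bits_per_px * fps

def max_cams_per_controller(width, height, fps, bus_bits_per_s=480e6):
    """How many such streams fit on one USB 2.0 (480 Mb/s) controller."""
    return int(bus_bits_per_s // camera_bits_per_s(width, height, fps))

# At 640 x 480 and 30 FPS, roughly three cameras fit per controller,
# consistent with checking that no more than three Pi USB Cams share a
# host controller in Device Manager:
print(max_cams_per_controller(640, 480, 30))
```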
The following protocol describes how to set up OBS Studio on a Windows desktop for multi-Pi USB Cam video recordings. For streaming over a network, adding audio recording, and many more applications, users can visit the OBS Wiki at https://obsproject.com/wiki/. Note that unless otherwise stated, all videos and snapshots of videos included in figures were acquired at 480p and 30 FPS using OBS Studio.
Step-by-step instructions
1. Connect one or more Pi USB Cams to a host Windows desktop, optionally fitted with USB PCIe expansion cards for added USB bandwidth (Fig. 5A).
2. To make sure that your multicamera configuration will not exceed the USB bandwidth during recording, open Device Manager, click View followed by Devices by connection, and confirm that no more than three Pi USB Cams are connected to each Host Controller (Fig. 5B).
3. Download, install, and launch the latest release of OBS Studio from: https://obsproject.com (Fig. 5C). Note: the default user interface can be customized under the View menu to hide unnecessary features such as the Audio Mixer and Scene Transitions.
4. Users can create one or more "profiles" that save any set of customized recording settings. To do this, go to the Profile menu, and click New (Fig. 5D1). In the pop-up window, enter a name for the new profile (e.g., "480p30fps" to reflect the recording resolution and frame rate that will be updated in step 5), uncheck the auto-configuration wizard, and click OK to finish creating the new profile (Fig. 5D2). Once a profile is selected, any changes in recording settings will be automatically saved under that profile. Note: one convenient way to use the "profile" feature is to create one for each experimental protocol that specifies its own recording needs and settings. Saved settings for each specific protocol can then be quickly applied before each recording session by simply selecting the appropriate profile.
5. To customize recording settings, click Settings on the Controls panel (Fig. 5D3) to open the settings window. Figure 5D4-D6 highlights several changes to the OBS Studio default settings that we recommend for behavior recordings. Click OK to save any changes to the recording settings and to close the pop-up window (Fig. 5D7).
6. In steps 6-9, we provide options to preview and record video from multiple video sources.
OBS Studio refers to content being broadcast at any given time as a "scene," whereas a configuration of scenes and their respective video sources is referred to as a "scene collection." Users may want to select multiple scenes and/or scene collections based on their video needs. For example, multiple video sources can be accommodated within a single scene. This allows for preview of multiple video sources in tandem. Importantly, recordings in this configuration will include video from each source tiled within a single file (Extended Data Fig. 6-1A). Video from multiple sources can also be previewed across separate scenes within a single scene collection. However, OBS Studio is only able to record from one scene at any given time in a multiple-scene configuration like this (Extended Data Fig. 6-1B). In instances when users want to record video from multiple sources in tandem into separate files, we recommend running multiple scene collections, each with a single scene and video source, in separate instances of OBS Studio (Extended Data Fig. 6-1C). To create one or more "scene collections" that can save any set of "scenes" and "video sources," go to the Scene Collection menu, and click New (Fig. 6A1). In the pop-up window, enter a name unique to your testing apparatus (e.g., "BOX1" for operant box 1) and click OK to finish creating the new scene collection (Fig. 6A2).
7. To add a video source to a scene collection, click on the + button on the Sources panel (Fig. 6A3), and select Video Capture Device (Fig. 6A4). In the pop-up window, enter a name (e.g., "BOX1 cam" for operant box 1 camera) for the new video source, and then click OK (Fig. 6A5). Once the "properties" window appears, select the correct camera source under Device, make any additional changes to the device settings, and click OK to finish setting up the new video source (Fig. 6A6).
8. Click Start Recording on the Controls panel to start recording and Stop Recording to stop (Fig. 6A7).
9. For multicamera recording in high-throughput experiments (e.g., involving multiple subjects in separate testing sites), repeat steps 6-7 to create one "scene collection" for each camera. Then, launch one more instance of OBS Studio for each additional camera by double-clicking the desktop shortcut (Fig. 6B1) followed by clicking Launch Anyway on the pop-up warning (Fig. 6B2). In each new instance, make sure to select the "scene collection" that contains the correct camera source for the recording site of interest (Fig. 6B3). To confirm that the recording will not overload the computer CPU, monitor the CPU usage of each individual OBS Studio instance.

Frame rate analysis
Many neuroscience techniques, such as in vivo calcium imaging, combine video recordings with optical signals to correlate behavioral events with neural activity. The potential for dropped or duplicate frames, which we have experienced with commercially available USB webcams, poses nontrivial challenges for offline data analysis in these kinds of experiments. Recordings with dropped frames appear fast-forwarded and have shorter duration during playback in third-party media players, which often assume a nominal frame rate higher than the actual frame rate achieved. Recordings with duplicate frames appear choppy, and thus have lower temporal resolution than desired, despite the high frame rate achieved. Both issues result in additional and unnecessary workload during postprocessing to accurately align behavioral and neural data. To evaluate the frame rate performance of Pi USB Cam, videos of freely behaving adult male Long-Evans and Sprague Dawley rats were acquired from inside separate operant boxes (MED Associates) using a Pi USB Cam and a generic commercial webcam (Logitech C930e). Camera performance was assessed under both red and white house light illumination.
For dropped frame analysis, Synapse (Tucker-Davis Technologies, TDT), an acquisition platform commonly used in neuroscience research, was used to acquire video recordings from both cameras simultaneously. Five videos of ~5-min duration each were captured into AVI format in each lighting condition from each camera at the maximum resolution (640 × 480) and frame rate (20 FPS) supported by Synapse. For each recording, the total number of frames and the timestamps of each frame were read from the Synapse data block using the MATLAB TDTbin2mat function (https://www.tdt.com/docs/sdk/offline-data-analysis/offline-data-matlab/overview/). For each video frame, Synapse stores two timestamps: one for the frame onset and one for offset. The onset timestamp of the very last frame was taken as the recording length, as the offset timestamp of the last frame was stored as inf instead of a real number. The true frame rate achieved was calculated as:

true FPS = total frames / recording length (s)

The total number of dropped frames was calculated as:

total dropped frames = target FPS × recording length (s) − 1 − total frames

The number of dropped frames per minute was calculated as:

dropped frames per min = total dropped frames / recording length (s) × 60

The video file length when played in third-party media players was calculated as:

video file length (s) = total frames / target FPS

For duplicate frame analysis, OBS Studio was used to acquire five videos of ~5-min duration each into mp4 format in each lighting condition from each camera at 640 × 480 resolution and 30 FPS. Visual inspection of videos from each camera made clear that those acquired with the commercial webcam contained abundant duplicate frames, whereas those acquired with Pi USB Cam were essentially devoid of duplicate frames.
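These dropped-frame calculations can be sketched directly in Python; the timestamp list below is a synthetic stand-in for the per-frame onset times read out by TDTbin2mat, used only to illustrate the equations:

```python
# Sketch of the dropped-frame metrics defined above. The timestamp list is
# synthetic; in practice it would come from the Synapse data block.
def dropped_frame_metrics(onset_ts, target_fps):
    total_frames = len(onset_ts)
    recording_length = onset_ts[-1]              # onset of the last frame (s)
    true_fps = total_frames / recording_length
    total_dropped = target_fps * recording_length - 1 - total_frames
    dropped_per_min = total_dropped / recording_length * 60
    playback_length = total_frames / target_fps  # apparent length offline (s)
    return true_fps, total_dropped, dropped_per_min, playback_length

# A camera that actually delivers 10 FPS for 60 s against a 20 FPS target:
ts = [i / 10 for i in range(1, 601)]             # 600 frames, last onset 60 s
fps, dropped, per_min, playback = dropped_frame_metrics(ts, target_fps=20)
print(fps, dropped, per_min, playback)
```

For this example, the true frame rate is half the target, and the 10-min worth of target frames missing shows up as a playback length of only 30 s at the nominal 20 FPS.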
To quantify this, a custom MATLAB script was used to extract relevant information from each recording, including the recording length and total number of frames, and to identify duplicate frames. Color video files were first converted to grayscale, and the frame-by-frame difference in grayscale intensity was calculated for each pixel. Plotting the frequency distribution of the maximum change in pixel intensity revealed a bimodal distribution apparent only in videos acquired with the commercial webcam. Using the local minimum as a guide, we identified a threshold for maximum pixel intensity change of 8 for videos acquired under red house light illumination and 10 for videos acquired under white house light illumination. Frames with a maximum pixel intensity change below or equal to these thresholds were algorithmically identified as duplicates, which matched the duplicates identified manually in videos acquired with the commercial webcam. The true frame rate of each video, which discounts the duplicate frames and more accurately reflects the temporal resolution, was calculated as:

true FPS = (total frames − total duplicate frames) / recording length (s)

The number of duplicate frames per minute was calculated as:

duplicate frames per min = total duplicate frames / recording length (s) × 60

Results are expressed as mean ± SEM.
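A minimal sketch of this duplicate-detection logic follows, in pure Python on tiny synthetic frames (the actual analysis was a MATLAB script operating on full grayscale video frames):

```python
# Duplicate-frame detection sketch: a frame is flagged as a duplicate when
# its maximum per-pixel grayscale change from the previous frame is at or
# below a threshold. Frames here are tiny synthetic 2x2 grayscale arrays.
def max_pixel_change(prev, curr):
    return max(abs(a - b)
               for row_p, row_c in zip(prev, curr)
               for a, b in zip(row_p, row_c))

def find_duplicates(frames, threshold):
    """Indices of frames whose change vs the previous frame is <= threshold."""
    return [i for i in range(1, len(frames))
            if max_pixel_change(frames[i - 1], frames[i]) <= threshold]

def true_fps(n_frames, n_duplicates, recording_length_s):
    # Equation from the text: discount duplicates before dividing by length
    return (n_frames - n_duplicates) / recording_length_s

frames = [[[10, 10], [10, 10]],
          [[10, 10], [10, 10]],   # exact repeat of frame 0 -> duplicate
          [[10, 10], [10, 30]]]   # a pixel changes by 20 -> real motion
print(find_duplicates(frames, threshold=8))   # [1]
```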

Fisheye distortion correction
Fisheye camera lenses, such as the one used in the recommended Arducam day-night vision camera, allow for capture of a large field of view given a limited object distance. However, they often produce images that are radially distorted, which may pose challenges to accurate position tracking. Here, we describe a simple method to digitally correct fisheye image distortion using a free, open-source OBS Studio plugin, the OBS ShaderFilter.
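The idea behind a power-based radial correction can be illustrated with a toy remapping. The model below, which scales the normalized radius by r ** (1 + power), is our own simplified stand-in for illustration, not the plugin's actual shader math: a positive power compresses the periphery toward the center (more fisheye-like distortion), a negative power stretches it back outward (correction), and zero leaves coordinates unchanged.

```python
# Toy radial remap illustrating a fisheye-style power parameter. This
# simplified model (r -> r ** (1 + power) on the normalized radius) is an
# assumption for illustration, not the OBS ShaderFilter's exact formula.
import math

def remap_point(x, y, power, cx=0.5, cy=0.5):
    """Radially remap normalized image coordinates (0..1) about the center."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return (x, y)            # the optical center never moves
    scale = r ** (1 + power) / r
    return (cx + dx * scale, cy + dy * scale)

print(remap_point(0.9, 0.5, power=0.0))    # zero power: identity
# Positive power pulls an off-center point inward (adds distortion);
# negative power pushes it back outward (correction):
print(remap_point(0.9, 0.5, power=0.3)[0] < 0.9)
print(remap_point(0.9, 0.5, power=-0.3)[0] > 0.9)
```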
Step-by-step instructions
1. On a Windows PC with OBS Studio installed, download the latest release of the OBS ShaderFilter plugin from its GitHub page: https://github.com/Oncorporation/obs-shaderfilter (Extended Data Fig. 8-1A1,A2), unzip the package file, and drag and drop its contents into the OBS program file directory (Extended Data Fig. 8-1A3). The default file location is C:\Program Files\obs-studio. Replace duplicate files if necessary (Extended Data Fig. 8-1A4).
2. Next, download the entire repository from its GitHub page and unzip it (Extended Data Fig. 8-1B1). Locate the "fisheye.shader" text file under the obs-shaderfilter-master\data\examples directory, and drag and drop it into the OBS program file directory at C:\Program Files\obs-studio\data\obs-plugins\obs-shaderfilter\examples to store it alongside the other shader filters (Extended Data Fig. 8-1B2).
3. Launch OBS Studio and add a video source if you have not already (Extended Data Fig. 8-1C1; Fig. 6A). To import prerecorded videos for offline fisheye correction, add a Media Source as the video source (Extended Data Fig. 8-1C2). To configure a fisheye camera for real-time fisheye correction during recording, add a Video Capture Device as the video source (Extended Data Fig. 8-1C2).
4. Right-click the video source and select the Filters option (Extended Data Fig. 8-1C3). In the pop-up window, under the Effect Filters header, click the + button to add a User-defined shader (Extended Data Fig. 8-1C4), which can then be renamed to "Fisheye correction" for clarity (Extended Data Fig. 8-1C5). Note: if the User-defined shader is missing, repeat steps 1 and 2 to make sure the plugin is correctly installed.
5. Select the option to Load shader text from file and click Browse to locate and load the "fisheye.shader" text file. Adjust the power parameter to enhance or reduce the fisheye effect (Extended Data Fig. 8-1C6). Note: a positive power adds more fisheye radial distortion, while a negative power corrects distortion. A power of zero means no change to the original image. See Extended Data Figure 8-2 for example power settings.
6. In the main OBS interface, press Start Recording to record a new video if a camera has been selected as the source, or re-record a video with fisheye distortion corrected if a prerecorded video file has been selected as the source (Extended Data Fig. 8-1C7).

Position tracking and locomotor activity measurement
To assess the effects of fisheye distortion and digital correction on locomotor activity measurement, position tracking was performed on videos with low distortion, fisheye distortion, and distortion digitally corrected. Five videos of ~5-min duration of adult female Long-Evans rats exploring two contextually distinct compartments in a standard conditioned place preference apparatus (MED Associates) were acquired in OBS Studio from two Pi USB Cams in tandem. One camera was equipped with a 70° HFOV low-distortion lens and the other with a 100° HFOV fisheye lens (Arducam Technology Co, Limited; Extended Data Table 1-1). Videos with fisheye distortion were subsequently corrected using the method described above. Position and locomotor measures were tracked offline using ANY-maze software (Stoelting Co). Each measure obtained was averaged across all five videos and compared using a one-way ANOVA.
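For reference, the F statistic for such a one-way comparison can be computed from first principles. The sketch below uses made-up placeholder values, not the study's measurements, purely to show the calculation:

```python
# One-way ANOVA F statistic, computed from first principles. The group
# values below are hypothetical placeholders, not data from the paper.
def one_way_anova_F(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Hypothetical distance-travelled values (m) from five videos per condition:
low_distortion = [12.1, 11.8, 12.4, 12.0, 11.9]
fisheye        = [10.2, 10.5, 10.1, 10.4, 10.3]
corrected      = [12.0, 12.2, 11.7, 12.1, 11.8]
print(one_way_anova_F([low_distortion, fisheye, corrected]) > 1)
```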
For real-time position tracking, a Pi USB Cam was directly interfaced with ANY-maze as a USB camera to provide live video feed.

Pi USB Cam offers superior video quality and performance
Of significant importance to our research needs was an affordable solution that performed better under low-/no-light conditions within a limited object distance compared with commercially available webcams. Pi USB Cam outperformed the Logitech C930e webcam on several measures of significance to behavioral neuroscience research. Monitoring of a standard rat operant testing apparatus with a working area of 11.625" L × 9.78" W × 7.35" H was successfully accomplished with the Pi USB Cam mounted overhead using the out-of-the-box 170° (DFOV) × 140° (HFOV) fisheye lens. Notably, in this orientation, the camera was placed <1 cm above the testing apparatus, therefore requiring essentially no additional vertical space for video acquisition (Fig. 7A). In contrast, the commercial counterpart, equipped with a 90° DFOV lens, necessitated ~20 cm of vertical space to capture a similar field of view. Moreover, positioning of the Pi USB Cam allowed for unobstructed access to the roof opening, enabling unimpeded movement of tethered animals, whereas angled placement of the Logitech webcam was required to avoid collision with the commutator and tether.
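The vertical-space comparison follows from simple lens geometry: for an ideal rectilinear lens, the distance needed to span an extent W is d = (W/2) / tan(FOV/2). Treating both lenses as rectilinear is a simplification (a true fisheye projection covers even more than this predicts), but it reproduces the ~20 cm figure for the 90° webcam:

```python
# Distance a camera must sit from the scene to span a given extent, for an
# ideal rectilinear lens: d = (extent / 2) / tan(FOV / 2). Applying this to
# a fisheye is a simplification -- real fisheye projections cover more.
import math

def distance_to_span(extent_cm, fov_deg):
    return (extent_cm / 2) / math.tan(math.radians(fov_deg) / 2)

# Operant-box working area, 11.625 in x 9.78 in, converted to cm:
length, width = 11.625 * 2.54, 9.78 * 2.54
diagonal = math.hypot(length, width)             # ~38.6 cm

print(round(distance_to_span(diagonal, 90), 1))   # 90 deg DFOV webcam: ~19 cm
print(round(distance_to_span(diagonal, 170), 1))  # 170 deg DFOV fisheye: ~2 cm
```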
Camera performance was qualitatively assessed under a no-visible-light condition (Fig. 7B) and several standard low-visible-light conditions inside an operant box, including cue light illumination (Fig. 7C), red house light illumination (Fig. 7D), white house light illumination (Fig. 7E), and white house light + cue light illumination (Fig. 7F). With adequate IR illumination, Pi USB Cam provided high image quality regardless of the level of ambient visible light. In contrast, the Logitech webcam was unable to perform in the no-light condition, and image quality was compromised under red house light illumination.
Pi USB Cam also outperformed the commercial webcam in terms of frame rate maintenance. Unlike the Pi USB Cam, the commercial webcam was unable to maintain the set frame rate of 20 FPS when interfacing with Synapse, losing on average 505.60 ± 69.40 frames per min under white house light illumination and 702.00 ± 0.00 frames per min under red house light illumination (Table 3). This resulted in video files that appear fast-forwarded and shortened when played offline by third-party media players that assume a constant, nominal frame rate of 20 FPS (Movie 1). Pi USB Cam, however, maintained a constant 20 FPS across all trials under both lighting conditions. In contrast to Synapse, OBS Studio was able to achieve the maximum frame rate of 30 FPS with either camera without dropping frames, as determined by the ratio of total number of captured frames to recording length (data not shown). However, videos recorded with the commercial webcam exhibited a high degree of duplicate frames. On average, webcam videos contained 1300.20 ± 1.20 duplicate frames per min under white house light illumination and 1303.40 ± 0.24 duplicate frames per min under red house light illumination (Table 4). As a result, these videos appear choppy during playback, reflecting reduced temporal resolution despite the high frame rate achieved (Movie 2). Importantly, this analysis cannot distinguish between duplicate frames and instances when the animal does not move between two frames. However, it should be noted that videos acquired with Pi USB Cam were judged to be free of duplicate frames on visual inspection. A similar algorithmic analysis of these videos failed to uncover a clear threshold suggestive of duplicates, as their frequency distributions of maximum change in grayscale pixel intensity appear unimodal as opposed to bimodal.
Nevertheless, when the same thresholds used for the commercial webcam were applied to these videos, an average of 466.20 ± 43.24 frames per min were identified for videos under red house light illumination and 310.40 ± 100.74 frames per min for videos under white house light illumination. Not only was the number of identified frames much lower (64-76% lower) than that observed with the commercial webcam, but this value also varied greatly from video to video. Frames identified as duplicates were also unevenly distributed, unlike those observed in videos acquired using the commercial webcam. Combined with visual inspection, these findings suggest that the algorithmically identified frames in Pi USB Cam videos reflect a lack of animal movement rather than duplicate frames.
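The duplicate-frame analysis described above can be sketched in a few lines: for each consecutive pair of frames, take the maximum absolute change in grayscale pixel intensity and flag the later frame as a near-duplicate when that change falls below a threshold. The code below is a minimal illustration on synthetic frames; the threshold value and the frame source (e.g., decoding the video with OpenCV) are assumptions, not the exact pipeline used here.

```python
import numpy as np

def count_near_duplicates(frames, threshold=2):
    """Count frames whose maximum absolute grayscale change relative to
    the previous frame is below `threshold` (a hypothetical cutoff that
    would, in practice, be read off a bimodal frequency distribution)."""
    duplicates = 0
    prev = None
    for frame in frames:
        current = frame.astype(np.int16)  # signed to avoid uint8 wraparound
        if prev is not None and np.abs(current - prev).max() < threshold:
            duplicates += 1
        prev = current
    return duplicates

# Synthetic check: three identical dark frames, then a brighter frame.
frames = [np.zeros((4, 4), np.uint8)] * 3 + [np.full((4, 4), 50, np.uint8)]
print(count_near_duplicates(frames))  # → 2
```

On real footage, plotting the per-frame maximum change would reveal the bimodal (webcam) versus unimodal (Pi USB Cam) distributions discussed above.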

Pi USB Cam is highly customizable for individual recording needs
The recommended Arducam day-night vision camera can produce high quality video images under both bright and low-/no-light conditions with relative ease (Figs. 7, 9, 10, 11; Movies 3, 4) and can also be permanently set to engage or disengage the IR filter (Fig. 3A). Moreover, the ability to physically separate the accompanying IR LEDs from the camera body allows for flexibility in illumination options under low-/no-light conditions (Figs. 4, 10D-F).
We also benchmarked several M12-sized lenses compatible with the Arducam day-night vision camera (see list in Extended Data Table 1-1), each of which is associated with a different field of view and accompanying degree of image distortion. Our research needs necessitate positioning of the Pi USB Cam off-center from the testing arena to accommodate tether access through the roof opening. However, this orientation has the potential to result in uneven image distortion across the field of view, as distortion tends to be minimal at the center of the fisheye field of view and progressively enlarged toward the radial edge of the image (Clemensson et al., 2020). Therefore, comparison of different lenses was performed with the camera placed directly above the center of the arena at the level of the Plexiglas roof, ~20 cm above the grid floor measuring 11.5" L × 10.25" D × 1.75" H. As shown in Figure 8, we found that the default lens (140° HFOV) struck the best balance between an adequate field of view to capture the majority of the area of interest and an acceptable amount of image distortion to enable accurate behavioral tracking. Comparison of the object distance required to obtain a similar field of view across different M12 fisheye lenses is provided in Extended Data Figure 8-2.
Using the default lens, Pi USB Cam was also able to capture a generous field of view when positioned for side viewing (at a distance of ~5 cm away from the Plexiglas wall; Fig. 9; Movie 3). However, in environments where object distance is not a limiting factor or position accuracy is a primary concern, users can opt for lenses with a narrower field of view and lower image distortion, or lenses designed specifically for minimum distortion (see Extended Data Table 1-1). For example, for overhead monitoring of a standard conditioned place preference testing apparatus with a working area of 32.70" L × 8.25" W × 8" H, we found that a 100° (HFOV) fisheye lens was ideally suited to capture the entire testing arena when the camera was positioned ~30 cm overhead (Fig. 11; Extended Data Fig. 8-3; Movies 5, 6).
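The trade-off between field of view and object distance follows simple pinhole geometry: to span a width W, a lens with horizontal field of view θ must sit at least d = (W/2)/tan(θ/2) away. The sketch below applies this rectilinear approximation to the 32.70" (~83.1 cm) CPP apparatus; real fisheye lenses cover somewhat more than the model predicts, so these figures are rough guides only.

```python
import math

def min_object_distance(width_cm, hfov_deg):
    """Pinhole-model estimate of the camera distance (cm) needed to span
    `width_cm` horizontally with a lens of the given HFOV."""
    return (width_cm / 2) / math.tan(math.radians(hfov_deg / 2))

# Estimated overhead distance for three of the benchmarked HFOVs
for hfov in (70, 100, 140):
    print(f"{hfov:3d} deg HFOV -> ~{min_object_distance(83.1, hfov):.0f} cm")
```

The ~35 cm estimate for the 100° lens is consistent with the ~30 cm mounting distance reported above, given that fisheye projection captures slightly more than the rectilinear model.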

Fisheye distortion correction allows for accurate position tracking and locomotor activity measurement
As an alternative to using low-distortion lenses, which often require a larger object distance that is incompatible with many experimental setups, fisheye distortion can be digitally corrected using the OBS ShaderFilter plugin (Extended Data Figs. 8-1, 8-2, and 8-3; Movie 5) as well as other widely available image processing algorithms. Benchmarked against a low-distortion M12 lens, the fisheye image distortion from a 100° HFOV lens and digital fisheye correction imparted minimal effects on position tracking efficiency (Extended Data Fig. 8-3A-C; Movie 5). One-way ANOVA of various locomotor measures revealed a significant between-group difference for distance traveled in the white side of the testing apparatus (F(2,4) = 6.538, p = 0.0462; Extended Data Fig. 8-3D). However, post hoc comparisons failed to identify a significant difference between precorrection and low-distortion (Dunnett's test, p = 0.0918) or postcorrection and low-distortion (Dunnett's test, p = 0.0783). While no other statistically significant differences were observed, visual inspection of the data makes clear that the difference between precorrection and low-distortion is lessened by digital correction. Therefore, digital fisheye correction is likely to improve tracking accuracy in experiments where absolute, rather than relative, measures are a high priority.
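For readers correcting distortion outside OBS Studio, the idea behind such algorithms can be sketched with a simple radial model: distorted radii are remapped outward so that straight lines are restored. The model and the `strength` constant below are illustrative assumptions (production corrections, e.g., OpenCV's fisheye calibration, fit lens-specific coefficients); this is not the ShaderFilter's exact algorithm.

```python
import numpy as np

def undistort_points(points, center, strength=0.002):
    """Remap fisheye-distorted pixel coordinates toward rectilinear
    positions with a toy model r_u = tan(r_d * k) / k, where k
    (`strength`) is a hypothetical per-lens constant from calibration."""
    pts = np.asarray(points, dtype=float) - center
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    rk = r * strength
    safe = np.where(rk > 0, rk, 1.0)        # avoid 0/0 at the image center
    scale = np.where(rk > 0, np.tan(rk) / safe, 1.0)
    return pts * scale + center

center = np.array([320.0, 240.0])
pts = undistort_points([[320.0, 240.0], [420.0, 240.0]], center)
print(pts)  # center is unchanged; off-center points move radially outward
```

Applying such a remap to tracked coordinates (rather than to every video frame) is a lightweight alternative when only position data need correcting.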

Custom 3D-printed components afford flexible installation options tailored to individual needs
Our custom 3D-printed components (Table 2) provide added protection and allow Pi USB Cam to be securely mounted on a variety of surfaces or structures. Using these components, Pi USB Cam can be installed on a commutator balance arm post (Fig. 9A,C), a ring stand (Fig. 10A), a wire shelf (Figs. 10D, 11A), or any flat surface (Fig. 9B,D). Moreover, the hinged camera mount components provide ample degrees of freedom to finely adjust camera positioning for optimal image acquisition. Thus, use of our custom 3D-printed components ensures Pi USB Cam is readily adaptable to virtually any recording condition. This includes recording within the limited space around a standard operant box inside a sound attenuating cabinet (Fig. 9; Movie 3), and many other common behavioral testing sites such as a home cage environment (Fig. 10; Movie 4) or larger scale testing arenas like a conditioned place preference (CPP) apparatus (Fig. 11).
Pi USB Cam is highly scalable for multisubject multisite recording needs
Using the free, open-source software OBS Studio, we were able to simultaneously video-monitor eight freely behaving rats in real time in separate operant boxes, with eight Pi USB Cams operating from a single host computer (Fig. 12A). Acquired recordings were subsequently saved into individual video files for further offline analysis. Using our settings (Fig. 5D), a 29-min recording produced on average a 738.5 MB video file in MKV format (data not shown). At any given moment, each instance of OBS Studio used between 3% and 6% of the six-core 3.20 GHz CPU on our host computer, and overall CPU usage never exceeded 50% when no other major application was running at the same time (Fig. 12B,C). Thus, the only major limiting factors to scaling Pi USB Cam are the USB bandwidth and CPU of the host computer.
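Using the file size observed here (~738.5 MB per 29-min MKV per camera), users can budget disk space for a planned session. The eight-camera, 60-min example below is a back-of-envelope estimate only; actual sizes depend on the chosen encoder settings.

```python
# Back-of-envelope storage estimate for multicamera recordings.
MB_PER_CAMERA_MIN = 738.5 / 29      # observed MKV size per camera-minute
n_cameras, session_min = 8, 60      # hypothetical recording session

total_gb = MB_PER_CAMERA_MIN * n_cameras * session_min / 1024
print(f"~{total_gb:.1f} GB per {session_min}-min session")  # → ~11.9 GB
```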

Discussion
Pi USB Cam is an affordable, DIY video recording solution that combines simple electronics and 3D printing to enable video monitoring of behavior in diverse recording environments under any lighting condition. Our detailed build instructions are easy to follow and require no specialized expertise. Camera components and accessories are easily sourced from globally accessible vendors. In addition, because DIY components are often discontinued, we provide an extensive list of alternatives that can serve as substitutes if circumstances require. Using this design, we demonstrate its ease of use as a plug-and-play USB camera, with notable superiority over a generic commercial webcam in terms of field of view, IR sensitivity, frame rate, and overall flexibility to meet individual research needs. Lastly, we show that Pi USB Cam, in combination with free, open-source video acquisition software, is easily scalable for multisubject and multisite recordings.
Hardware flexibility is the main advantage of Pi USB Cam over commercially available webcams. Indeed, despite their out-of-the-box user friendliness, the hardware of most commercial webcams is simply not designed to acquire high quality video recordings in confined spaces and/or poor lighting conditions. In contrast, the hardware of Pi USB Cam can be readily customized to meet these demands. At the heart of the software design, the Show-me webcam firmware enables Raspberry Pi cameras to become an out-of-the-box, plug-and-play USB camera. In contrast, most, if not all, other DIY solutions require some level of programming proficiency to unlock even the most basic functionalities of the microcomputer and related hardware (Saxena et al., 2018; Singh et al., 2019; Weber and Fisher, 2019; Clemensson et al., 2020; Centanni and Smith, 2021). While the steep learning curve associated with such requirements tends to turn researchers away from DIY solutions, Pi USB Cam ensures novices need not reinvent the wheel. As a USB camera, Pi USB Cam can be readily integrated into preexisting video recording systems, ranging from the popular open-source recording/streaming app OBS Studio to the professional neural/behavioral recording software Synapse. As such, limited only by the specifications of the host recording computer, Pi USB Cam can be readily scaled to facilitate multisubject, multisite, high-throughput experimental designs. Depending on the recording software of choice, users of Pi USB Cam have the option to watch live video feeds for real-time behavioral monitoring on screen or via network streaming. Users also have the freedom to save video recordings in their preferred file format and location, where they can be accessed for offline analysis using sophisticated behavioral tracking algorithms such as ezTrack (Pennington et al., 2019) and DeepLabCut (Mathis et al., 2018).
For real-time applications that rely on live video feeds, Pi USB Cam is again able to integrate as a plug-and-play USB camera with powerful, yet widely available neuroscience research applications like ANY-maze, which combine real-time video tracking with various control devices (e.g., optogenetic laser) to achieve closed-loop neural/behavioral manipulation (see Movie 6 for example). Although not tested here, Pi USB Cam is also compatible with recently developed open-source video tracking and control systems, which readily support USB cameras, including DeepLabCut-Live (Kane et al., 2020) and DeepLabStream (Schweihoff et al., 2021).
While Pi USB Cam offers an easy-to-use, flexible, and affordable option for behavioral neuroscientists to acquire video recordings, several limitations of our system should be noted. Previous approaches using Raspberry Pi cameras are designed such that the microcomputer performs the heavy lifting of video encoding (Saxena et al., 2018; Singh et al., 2019; Weber and Fisher, 2019; Clemensson et al., 2020; Centanni and Smith, 2021). The advantage of such a design is that it has no inherent limits on scalability other than the number of cameras an individual can purchase. In contrast, Pi USB Cam requires a standalone host computer to perform recordings. In addition to the expense and footprint associated with a host computer in close proximity to behavioral testing apparatuses, the number of video streams that can be encoded and stored in tandem is limited by the host computer's processing power and bandwidth. While such a configuration could be problematic for some research needs, we felt that functioning like a commercial webcam makes Pi USB Cam more user friendly and approachable than standalone alternatives. Moreover, connection with a host computer provides significantly greater ease of real-time video monitoring than standalone Raspberry Pi cameras. While Show-me webcam is more than adequate for video recording, it does not currently support audio capture. Although this could change in a later release of the firmware, video recordings requiring audio measures (e.g., ultrasonic vocalizations, audio cues) currently necessitate use of a standalone microphone to capture audio independent of the video feed, which can then be synchronized online or offline depending on the recording software used. As with any open-source DIY solution, Pi USB Cam software will require regular and timely updates in tandem with upgrades to Raspberry Pi hardware and operating system. However, the well-established open-source community of Show-me webcam developers has a strong history of providing regular support and upgrades for the firmware. This is unlike previous niche DIY video solutions, which have often relied on a single individual to perform upgrades and consequently have had difficulty surviving beyond initial hardware and operating system versions (Singh et al., 2019). Finally, our custom 3D-printed case and mount tools require access to a quality 3D printer. However, as 3D printing technology matures, small-scale consumer printers are becoming increasingly affordable, and larger-scale industrial fee-for-service printing is often offered at universities/institutions.

In summary, Pi USB Cam is a highly versatile and affordable DIY video recording solution for real-time behavioral monitoring and offline analysis. Requiring minimal time, expertise, and financial commitment to implement, Pi USB Cam offers behavioral neuroscientists a powerful, yet simple, solution for high quality and high-throughput behavioral data collection. We encourage users to reference the current manuscript when implementing Pi USB Cam in their own experiments.

Figure 10. Camera implementation for home cage monitoring. Custom 3D-printed components can also be configured to accommodate home cage recordings in a variety of settings, including mounting to a ring stand (A-C) for recordings performed offsite (e.g., testing room) and on wire shelving (D-F) for recordings inside the vivarium. For each option, (1) depicts the CAD drawing of 3D-printed components. Close-up photographs of each configuration are provided in (2), with object distance indicated in (3). Pi USB Cam is easily mounted to a ring stand (A) to accommodate offsite recordings in the home cage, shown here in both bright-light (B) and no-light (C) conditions. Note that in this configuration the IR LEDs are used in the default configuration such that they are attached to and powered directly by the main camera board. Using the custom 3D-printed G clamp, Pi USB Cam (and independent IR LEDs) can also be mounted directly to shelving (D), shown here in both bright-light (E) and no-light (F) conditions.

Figure 11. Camera configuration for large apparatus video recordings. Together with custom 3D-printed components, Pi USB Cam can be configured to record from large behavioral testing arenas like the conditioned place preference apparatus shown here. A1, CAD drawing of the 3D-printed components used in this configuration. A2, Close-up photograph of Pi USB Cam mounted overhead on a wire shelf. A3, The entire testing arena is visualized overhead with the camera mounted at a distance of ~30 cm using a 100° HFOV fisheye lens. Video snapshots acquired with this configuration under bright-light (B) and no-light (C) conditions.

Figure 12. Pi USB Cam scalability is limited only by USB bandwidth and host computer specifications. A, Multisubject multisite recordings are easily achieved using the free and open-source video capture software OBS Studio, depicted here with eight independent Pi USB Cams installed in eight separate operant boxes. B, Windows task manager shows the CPU usage of each instance of OBS Studio during multicamera recordings (n = 8) using the same video acquisition settings shown in Figure 5D and a host computer with the specifications depicted in C, including CPU, memory (1), and add-on USB controllers from a USB PCIe expansion card (2).