Abstract
An exciting aspect of neuroscience is developing and testing hypotheses via experimentation. However, due to logistical and financial hurdles, the experiment and discovery component of neuroscience is generally lacking in classroom and outreach settings. To address this issue, here we introduce RetINaBox: a low-cost open-source electronic visual system simulator that provides users with a hands-on tool to discover how the visual system builds feature detectors. RetINaBox includes an LED array for generating visual stimuli and photodiodes that act as an array of model photoreceptors. Custom software on a Raspberry Pi computer reads out responses from model photoreceptors and allows users to control the polarity and delay of the signal transfer from model photoreceptors to model retinal ganglion cells. Interactive lesson plans are provided, guiding users to discover different types of visual feature detectors—including ON/OFF, center-surround, orientation-selective, and direction-selective receptive fields—as well as their underlying circuit computations.
- center-surround
- direction selectivity
- discovery
- education and outreach
- learning tool
- open-source
- orientation selectivity
- receptive field
- visual neuroscience
Significance Statement
RetINaBox represents a new conceptual way to teach high-level neuroscience ideas, via the joy of discovery. It provides users with an interactive, hands-on system in which to discover how the brain implements visual feature-selective computations like center-surround, orientation selectivity, and direction selectivity. RetINaBox recreates the experience of being an experimental visual neuroscientist by letting users discover the visual stimuli that activate model neurons and the circuits that enable such feature-selective responses.
Introduction
The manner in which the brain encodes sensory stimuli varies across brain areas and is often far from predictable (Hubel and Wiesel, 2005; Gefter, 2015; Trenholm and Krishnaswamy, 2020). Thus, to discover how sensory inputs are represented in the brain, we need to record from neurons while providing sensory stimulation. A challenge in neuroscience education and outreach is how to incorporate such experimental work into lesson plans, so that instead of solely learning lists of facts, students get hands-on experience that captures the excitement of discovery. Laboratory classes have long been used to address this challenge, but their scope is often limited by financial, infrastructural, technical, ethical, and training constraints. For example, in vivo single-unit recordings from animal brains during sensory stimulation are widely used to capture the neural code but are too difficult to recapitulate in pedagogical settings.
To this end, we developed RetINaBox as a neuroscience educational/outreach tool that provides users with an interactive, hands-on system with which to discover how the visual system implements several important feature-selective computations that were originally discovered through single-cell neurophysiological recordings. RetINaBox consists of a low-cost computer, simple electronic components (LEDs and photodiodes), 3D-printed parts, and custom-written open-source software. Through several lesson plans, RetINaBox exposes users to numerous computations in the visual system, including ON/OFF processing (Hartline, 1938; Famiglietti and Kolb, 1976; Slaughter and Miller, 1981), center-surround receptive fields (RFs; Barlow, 1953; Kuffler, 1953), orientation selectivity (Hubel and Wiesel, 1959, 1962), and direction selectivity (Barlow et al., 1964; Barlow and Levick, 1965). RetINaBox also includes Discovery Mode, which recreates the experience of being an experimental visual neuroscientist: users load a preset model neuron into RetINaBox but are not shown the wiring scheme of its inputs; users then need to test different visual stimuli until they discover the specific stimulus that activates the neuron; finally, users are tasked with discovering the circuit wiring scheme that underlies the neuron's stimulus selectivity.
Materials and Methods
The user manual (see Extended Data 1 or GitHub) provides a parts list and guide for building RetINaBox. RetINaBox uses a Raspberry Pi 500 computer, with easy access to GPIO pins for sending and receiving signals. We provide custom-written open-source software for running RetINaBox (see Extended Data 1 or GitHub). At the time of publication, the cost for all components, including a Raspberry Pi and a monitor, was <$350 USD. The build time is ∼4 h.
Results
Simplified model retina
To build RetINaBox, we sacrificed some biological details, since our main goal was to introduce users to general principles without requiring them to first learn about specific implementations of these feature-selective computations in specific cell types of the visual system. As such, and as outlined in more detail below and in Figure 1, A and B, RetINaBox does not include model horizontal cells, bipolar cells, or amacrine cells. To provide a specific example of the pedagogical philosophy behind RetINaBox: instead of having five separate lessons outlining the exact biological details behind how center-surround is differentially implemented in photoreceptors, horizontal cells, bipolar cells, amacrine cells, and ganglion cells, RetINaBox includes a single lesson focused on the general concept of center-surround.
RetINaBox design and GUI overview. A schematic of retinal circuitry (A) and the simplified model circuitry of RetINaBox (B). C, An overview of RetINaBox, including the 3D-printed case, the visual stimulator (3 × 3 LED array), and photoreceptor array (3 × 3 photodiode array), along with the connectivity port between RetINaBox and the Raspberry Pi, and the output signals of the two model RGCs (2 color LEDs on top of RetINaBox and 3.3 V output pins on the back of RetINaBox). D, The RetINaBox GUI, showing the Stimulus Controller, Connectivity Manager, and Signal Monitor. E, The Connectivity Manager pop-up window, which allows users to set the connectivity (silent, +, or −) and delay (none, short, medium, or long) of the connection between each of the nine model photoreceptors and each of the two RGCs. The Connectivity Manager allows users to set the RGC threshold (i.e., how many + inputs an RGC needs to receive to respond) and type (ON or OFF). F, Methods for monitoring RGC activity, either by watching the RGC responses in the GUI's Signal Monitor, or by monitoring the yellow and green LEDs on the top of RetINaBox, or by connecting external electronic devices to the 3.3 V output pins on the back of RetINaBox.
Seeing starts in photoreceptors. In the vertebrate retina, photoreceptors form a single layer of cell bodies and are spatially organized as a lattice, such that the light-receptive outer segments of neighboring photoreceptors detect changes in light intensity in neighboring regions of visual space (Ahnelt, 1998; Ahnelt and Kolb, 2000). To model photoreceptors, we used photodiodes—semiconductor diodes sensitive to changes in photon flux. To model an array of photoreceptors, we built a 3 × 3 array of photodiodes (Fig. 1B,C). This is the smallest photoreceptor array that can implement all the feature-selective computations we sought to explore with RetINaBox: ON/OFF, center-surround, orientation selectivity, and direction selectivity.
Next, while the vertebrate retina is a complex tissue composed of multiple cell types located in specific anatomical layers (Krishnaswamy and Trenholm, 2023; Fig. 1A), here we designed a simplified model retina whereby the array of nine model photoreceptors can be connected to two model retinal ganglion cells (RGCs; Fig. 1B). Users should note that this is a simplification—in the actual vertebrate retina photoreceptors do not directly connect to RGCs (Fig. 1A). The contribution from other retinal neurons to visual computations is incorporated into RetINaBox's connectivity functions that transform signals passing from photoreceptors to RGCs. The photodiodes are powered by the Raspberry Pi, and the output of each photodiode is sampled by a different Raspberry Pi GPIO pin.
Visual stimulation
So that each photodiode can be independently activated, we aligned the photodiode array with a 3 × 3 LED array (Fig. 1C), with each LED being independently controlled by its own Raspberry Pi GPIO pin. To ensure that photodiode activation is not modulated by variations in ambient light levels, we used infrared (IR) stimulation LEDs and IR-sensitive photodiodes. However, so that users can see the pattern and timing of RetINaBox's stimulus LEDs (the 940 nm IR LEDs being invisible to the human eye), we added a second 3 × 3 array of visible (white light) LEDs pointing upward (i.e., away from the photodiodes; Fig. 1C).
Graphical user interface
To control LED stimulation, monitor model photoreceptor responses, connect model photoreceptors to model RGCs, and monitor RGC output responses, we designed a graphical user interface (GUI) to control RetINaBox (Fig. 1D,E). The GUI has three panels: (1) Visual Stimulus Controller; (2) Connectivity Manager; and (3) Signal Monitor (Fig. 1D).
The Visual Stimulus Controller (Fig. 1D) allows the user to independently control activation of each LED in the 3 × 3 array. Visual stimuli can be displayed in a static manner or can be made to move either leftward or rightward at three different speeds. Visual stimuli can also be delivered by turning on all stimulus LEDs and then placing shapes in between the stimulation LEDs and photodiodes to deliver specific patterns of light to the photoreceptor array (see the user manual for details).
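The moving-stimulus option can be pictured as a sequence of 3 × 3 frames, one frame per time step, with the stimulus speed set by how quickly frames are advanced. A minimal sketch of this idea (a hypothetical helper for illustration, not part of the RetINaBox software; hardware code would map each frame cell to a GPIO pin):

```python
# Sketch of generating drifting-bar frames for a 3x3 LED array.
# Each frame is a 3x3 grid of 0/1 values (1 = LED on). This is an
# illustrative helper, not the actual RetINaBox stimulus code.

def drifting_bar_frames(direction="right"):
    """Return the three frames of a vertical bar crossing the array."""
    cols = [0, 1, 2] if direction == "right" else [2, 1, 0]
    frames = []
    for c in cols:
        frame = [[1 if col == c else 0 for col in range(3)]
                 for row in range(3)]
        frames.append(frame)
    return frames

for frame in drifting_bar_frames("right"):
    print(frame)  # bar steps from the left column to the right column
```

Playing the same frame list with a shorter or longer pause between frames gives the different stimulus speeds.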
The Connectivity Manager (Fig. 1D,E) allows users to connect the output from each model photoreceptor to a model RGC. The GUI features two model RGCs. By clicking on a model RGC (Fig. 1D), a pop-up allows users to modify the connectivity of each photoreceptor to that RGC (Fig. 1E). Each model RGC can receive input from all nine model photoreceptors. For each connection, the polarity (silent, +, or −, which corresponds to the photoreceptor sending a 0, +1, or −1, respectively, to the RGC) and time delay (none, short, medium, long) can be adjusted. The response threshold for each RGC can be set between 1 and 9, indicating how many + photoreceptor inputs need to be received for the RGC to respond. Additionally, RGCs can be set as either ON or OFF type (Fig. 1D), modeling ON/OFF retinal processing. ON RGCs receive signals from photoreceptors that are being stimulated by light. OFF RGCs receive signals from photoreceptors that are not being stimulated by light. RGC output is binary: ON or OFF.
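Under our reading of these rules, the RGC response computation can be sketched in a few lines of Python (a simplified model for illustration; the released RetINaBox software may implement the details differently):

```python
# Sketch of the model RGC response rule, as described above: each
# connection contributes -1, 0, or +1, and the summed input must reach
# the threshold. A simplified model, not the actual RetINaBox code.

def rgc_response(stimulus, weights, threshold=1, cell_type="ON"):
    """stimulus: 3x3 grid of 0/1 light values.
    weights: 3x3 grid of connection polarities (-1, 0, or +1).
    An ON cell receives signals from stimulated photoreceptors; an
    OFF cell receives signals from unstimulated ones. Output is binary."""
    total = 0
    for r in range(3):
        for c in range(3):
            lit = stimulus[r][c] == 1
            if (cell_type == "ON") == lit:  # ON sees lit, OFF sees unlit
                total += weights[r][c]
    return total >= threshold

# A single + connection from the center photoreceptor, threshold 1:
w = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
spot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(rgc_response(spot, w, cell_type="ON"))   # True: lit center drives the ON cell
print(rgc_response(spot, w, cell_type="OFF"))  # False: the OFF cell ignores lit inputs
```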
The Signal Monitor (Fig. 1D) plots when each model photoreceptor (i.e., photodiode) is activated by the visual stimulus, the polarity of the signal transfer from each photoreceptor to each model RGC, and each model RGC's output response (Fig. 1D,F). In addition, the output signals of the two model RGCs are indicated by two colored LEDs (RGC1, green; RGC2, yellow) on the top of RetINaBox (Fig. 1C,F). Furthermore, the output of the RGCs can be used to drive external electronic components via output pins on the rear of RetINaBox that send out 3.3 V signals when a given model RGC is activated (Fig. 1C,F).
Lesson 1—explore ON/OFF processing and crack a code with center-surround RFs
Following experiments that described light-evoked spiking in the optic nerve (Adrian and Matthews, 1927), RGCs were found to have spatially localized RFs (i.e., they only “see” within a small part of the visual scene), with some RGCs responding specifically to either increases or decreases in luminance over their RF (Hartline, 1938). Such ON versus OFF responses were subsequently found to arise in bipolar cells, with “sign-conserving” OFF bipolar cells possessing ionotropic glutamate receptors in their dendrites and “sign-inverting” ON bipolar cells possessing metabotropic glutamate receptors in their dendrites (Nelson and Connaughton, 1995; Krishnaswamy and Trenholm, 2023). ON/OFF processing helps with contrast sensitivity and enables robust detection of both increases and decreases in luminance (Schiller et al., 1986).
It was also discovered that RGCs care about the pattern of visual stimuli falling within their RFs, due to a center-surround RF organization (Barlow, 1953; Kuffler, 1953; Trenholm and Krishnaswamy, 2020). For an ON-center RGC, increasing luminance with a spot of light located directly above the cell—usually corresponding to the location of its cell body and most of its dendritic tree—maximally activates the cell. However, if the size of the visual stimulus is increased beyond the RF center, the cell's response decreases due to activation of an inhibitory surround. While first described in RGCs, surround inhibition was subsequently described at each level of retinal processing—in photoreceptors (Baylor et al., 1971), horizontal cells (Kawai, 2022), bipolar cells (Werblin and Dowling, 1969), and amacrine cells (Nelson, 1982)—via inhibitory lateral connections from horizontal cells and amacrine cells. Center-surround RFs mean that RGCs are optimized to respond to local luminance contrast (Trenholm and Krishnaswamy, 2020)—meaning that most visual neurons are not strongly activated by homogeneous, spatially redundant scenes—with the RF size relating to the optimal spatial frequency of luminance contrast that activates a cell.
Lesson 1 of RetINaBox has users explore ON/OFF and center-surround RFs. The first goal is to connect the photoreceptors to RGC1 so that it will respond when a single model photoreceptor is activated, but not when that photoreceptor is activated at the same time as other photoreceptors in the array (Fig. 2A–D; e.g., generate an ON-center RGC that is selective to a small spot of light centered on the middle of the photoreceptor array). The next goal is to generate a second RGC with the same feature selectivity but located in a different part of the visual field (i.e., users will have two ON-center RGCs with RF centers in different locations). By generating two RGCs with RFs centered in different parts of the visual field, RetINaBox illustrates that an individual photoreceptor can contribute to distinct parts of different downstream neurons’ RFs (in this example, the photoreceptor that is responsible for one RGC's RF center contributes to the other RGC's RF surround). The next goal involves switching one of the RGCs from ON-center to OFF-center. This will help users learn about how the visual system differentially processes increases versus decreases in luminance (Fig. 2A–D). Finally, users are tasked with generating two ON-center RGCs with the same RF center location but with preferences for spots of different sizes (Fig. 3A–D). This will show users how the RF size controls the spatial frequency of luminance contrast that activates each RGC. For all lessons, if users need assistance solving the activities, they can consult the lesson plan document or load example Connectivity Manager solutions as presets from the “Lessons” tab in the menu bar. In addition, the GUI allows users to save their solutions and access them later.
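One possible ON-center wiring (a + connection from the central photoreceptor, − connections from the eight surrounding ones, and a threshold of 1) can be checked in a standalone Python sketch. This is an illustration of the concept, not necessarily the preset solution shipped with Lesson 1:

```python
# One possible Connectivity Manager solution for an ON-center RGC
# (illustrative; Lesson 1's preset solutions may differ):
# center photoreceptor -> +, all eight surround photoreceptors -> -.
weights = [[-1, -1, -1],
           [-1, +1, -1],
           [-1, -1, -1]]

def on_rgc(stimulus, threshold=1):
    # Sum signed inputs from stimulated (lit) photoreceptors only.
    total = sum(weights[r][c] for r in range(3) for c in range(3)
                if stimulus[r][c])
    return total >= threshold

small_spot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # center only
full_field = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # spot grown past the RF center
print(on_rgc(small_spot))  # True: the center alone drives the cell
print(on_rgc(full_field))  # False: the surround cancels the center
```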
Lesson 1—ON versus OFF processing. A, Example circuits for generating RGCs with center-surround ON (left) and OFF (right) RFs. Example settings for the Connectivity Manager (B) and the Visual Stimulus Controller (C). D, Example Signal Monitor output for the settings outlined above, with the stimulus LEDs turned on for 3 s (small light spot, left; small dark spot, right).
Lesson 1—center-surround RFs. A, Example circuits for ON center-surround RFs with preferences for small (left) and large (right) spots. Example settings for the Connectivity Manager (B) and the Visual Stimulus Controller (C). D, Example Signal Monitor output for the settings outlined above, with the stimulus LEDs turned on for 3 s (small spot, left; large spot, right).
Lesson 1 ends with a code breaking game, which gets users to apply concepts related to center-surround RFs to solve puzzles. Upon selecting a code breaking challenge, users are provided with a code in the form of a series of visual stimuli, with each visual stimulus representing a letter from the alphabet (Fig. 4). Users are also provided with a cipher containing information they need to solve the problem (Fig. 4). The cipher indicates what the feature preferences should be for the two model RGCs and which letters of the alphabet correspond to neither RGC being activated, one or the other RGC being activated in isolation, or both RGCs being activated together. Once the Connectivity Manager is set according to the cipher, users present RetINaBox with the indicated visual stimuli. Based on the output of the RGCs to each visual stimulus, users enter the corresponding letters into the GUI to solve the code (Fig. 4).
Lesson 1—code breaker activity with center-surround RFs. A, A visualization of the code breaker GUI tab related to Lesson 1. Users must use the cipher (right) to correctly set up the Connectivity Manager for the two RGCs to obtain the indicated feature-selective responses. Next, users place the visual stimuli indicated on the left into RetINaBox and use the cipher instructions (bottom right) to transcribe the RetINaBox output into the correct letter for each visual stimulus in the code.
Lesson 2—build a shape detector with orientation-selective RFs
Despite primary visual cortex (V1) receiving its main sensory input from RGCs—via LGN relay neurons that also tend to possess center-surround RFs (Hubel and Wiesel, 1961; Singer and Creutzfeldt, 1970)—recordings showed that most V1 neurons do not possess center-surround RFs. Instead, many V1 neurons exhibit orientation-selective tuning (Hubel and Wiesel, 1962, 1968), meaning that they are optimally activated by an extended edge (or line) in a specific part of the visual field, aligned in a specific orientation. It was posited that such orientation-selective responses arise when a single V1 neuron pools inputs from multiple center-surround LGN neurons whose RFs are spatially offset along a line (Hubel and Wiesel, 1962; Angelucci and Trenholm, 2024). While early work in cats and primates found that orientation selectivity appears first in V1 (Hubel and Wiesel, 1962, 1968), subsequent work in other species, including rabbits (Levick, 1967) and mice (Baden et al., 2016; Nath and Schwartz, 2016), found that orientation selectivity can also arise at the level of RGCs. Such orientation-selective RFs are thought to be an efficient way to sample visual statistics in the natural world (Olshausen and Field, 1996; Coppola et al., 1998). Orientation-selective signals can then be combined in downstream neurons in various ways, for instance resulting in spatially invariant orientation-selective cells (i.e., a complex cell that receives input from multiple simple cells, with the simple cells all being tuned to the same orientation but exhibiting RFs that are spatially offset from one another; Hubel and Wiesel, 1962) and in cells that respond to combinations of edges/lines, which can in turn serve as building blocks for visual object detectors (Hesse and Tsao, 2020).
Lesson 2 of RetINaBox begins by asking users to build an RGC with an orientation-selective RF. The first goal is to connect the photoreceptors to RGC1 in such a way that it will respond when a specific set of three adjacent photoreceptors are activated (i.e., a three-photodiode-long line) but not when any other arrangements of photoreceptors are activated (Fig. 5A,B). Then, users are asked to generate a second orientation-selective cell with a preference for the same line location as RGC1 but with a preference for a dark line. Next, users are tasked with setting up RGC2 to exhibit a preference for a line of a different orientation than RGC1 (Fig. 5A,B). The next goal is to generate two RGCs with preferences for lines of the same orientation but of different thicknesses. Then, users are tasked with generating two orientation-selective cells with preferences for a bright line of the same orientation and same spatial location but of different lengths. This helps users learn about the concept of end-stopping (Hubel and Wiesel, 1965), which is thought to be important for enabling encoding of high curvature features in visual scenes (Ponce et al., 2017).
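For instance, wiring + connections along the middle row, − connections everywhere else, and a threshold of 3 yields a cell that only a complete horizontal line can drive. A standalone sketch (illustrative wiring, not necessarily the Lesson 2 preset):

```python
# Sketch of an orientation-selective wiring (illustrative): + along the
# middle row, - elsewhere, threshold 3, so only the full horizontal
# line of three photoreceptors activates the cell.
weights = [[-1, -1, -1],
           [+1, +1, +1],
           [-1, -1, -1]]

def line_rgc(stimulus, threshold=3):
    # Sum signed inputs from stimulated (lit) photoreceptors only.
    total = sum(weights[r][c] for r in range(3) for c in range(3)
                if stimulus[r][c])
    return total >= threshold

horizontal = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
vertical = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
print(line_rgc(horizontal))  # True: preferred orientation
print(line_rgc(vertical))    # False: the orthogonal line is penalized
```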
Lesson 2—shape detector activity with orientation-selective RFs. A, Example circuits for generating RGCs with orientation-selective RFs but preferring lines of different orientations. B, Example settings for the Connectivity Manager. C, Demonstration of the shape detector activity: the settings in B were entered into the Connectivity Manager. A shape corresponding to that created by joining the preferred stimuli of RGC1 and RGC2 was made with the Visual Stimulus Tool, which was then placed in the correct spatial location between RetINaBox's stimulation LEDs and photoreceptor array. Both RGCs are activated at the same time: both yellow and green LEDs are activated.
Lesson 2 ends with users being asked to build a shape detector, which gets users to apply concepts related to orientation-selective RFs to solve the problem. First, users need to optimize two ON RGCs to be selective for lines of different orientations at specific locations over the photoreceptor array, with the combination of the two lines representing a shape (Fig. 5C). Then, users are instructed in basic electronics to connect the digital outputs of the two RGCs (3.3 V outputs located on the rear of RetINaBox) to generate a buzzer that is activated via an AND gate, meaning that both RGCs need to be activated for the buzzer to sound. This is meant to be analogous to the way that some neuroscientists work when they listen to their experiments in real time by playing their electrophysiological recordings through a speaker.
Lesson 3—play a video game with direction-selective RFs
Hubel and Wiesel found that some V1 cells, aside from being orientation selective, were also direction selective (Hubel and Wiesel, 1962). Soon afterward, direction-selective RGCs were described in the rabbit retina (Barlow et al., 1964; Barlow and Levick, 1965). Direction-selective RGCs were subsequently found in a variety of species, including mice (Weng et al., 2005; Sun et al., 2006) and primates (Kim et al., 2022; Wang et al., 2023). Direction-selective visual responses have been particularly well studied in the fly visual system (Borst et al., 2020). Direction-selective responses can help to stabilize eye/head movements with respect to the visual scene (Oyster et al., 1972; Yoshida et al., 2001; Yonehara et al., 2016) and help an animal distinguish between external and self-generated movements in the visual scene (Britten, 2008; Sabbah et al., 2017). Direction-selective responses can arise from spatially offset excitation/inhibition and spatially asymmetric delays in inputs along the preferred-null axis (Mauss et al., 2017).
Lesson 3 of RetINaBox asks users to build RGCs with direction-selective RFs. The goal is to connect the photoreceptors to an RGC so that it will respond when the user moves a visual stimulus (e.g., the user's hand) in one direction over the photoreceptor array, but not when the same visual stimulus is moved in the opposite direction (Fig. 6A,B). The next goal is to generate a second RGC with a preference for motion in the opposite direction (Fig. 6A,B). Next, users are instructed to generate two RGCs, each with a preference for the same visual stimulus moving in the same direction, but at different speeds. This exposes users to the concept of temporal frequency tuning.
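The delay mechanism behind these circuits can be demonstrated with a small discrete-time model: if the per-column delays compensate for the stimulus travel time in one direction, the delayed inputs arrive at the RGC together and cross threshold, whereas motion in the opposite direction spreads them out in time. A sketch with illustrative timing values (not RetINaBox's actual delay constants):

```python
# Discrete-time sketch of delay-based direction selectivity
# (illustrative timing, not RetINaBox's delay settings). Column delays
# of left=2, middle=1, right=0 steps mean inputs from a rightward sweep
# (left column activated first) all reach the RGC simultaneously.
delays = {0: 2, 1: 1, 2: 0}   # column index -> delay in time steps
threshold = 3

def rgc_spikes(activation_times):
    """activation_times: column -> time the stimulus crosses it.
    Returns the time bins in which enough delayed inputs coincide."""
    arrivals = {}
    for col, t in activation_times.items():
        arrivals[t + delays[col]] = arrivals.get(t + delays[col], 0) + 1
    return [t for t, n in sorted(arrivals.items()) if n >= threshold]

rightward = {0: 0, 1: 1, 2: 2}   # left column crossed first
leftward = {2: 0, 1: 1, 0: 2}    # right column crossed first
print(rgc_spikes(rightward))  # [2]: delayed inputs coincide -> response
print(rgc_spikes(leftward))   # []: inputs spread out -> silence
```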
Lesson 3—block breaker video game with direction-selective RFs. A, Example circuits for generating direction-selective RGCs with preferred directions for leftward (left) and rightward (right) motion. B, Example settings for the Connectivity Manager. C, The RetINaBox GUI can be used to load a block breaker video game that takes its left and right input commands from the activity of RGC1 and RGC2, respectively.
Lesson 3 ends with users building a virtual brain–computer interface, which tasks users with generating robust direction-selective detectors and applying the output of RetINaBox’s RGCs to control a video game, via left and right commands (Fig. 6).
Lesson 4—discovery mode: what does RetINaBox want to see?
To replicate the discovery aspect of scientific exploration where neuroscientists try to figure out what visual stimuli drive visual neurons, Lesson 4 tasks users with discovering the visual features that drive mystery RGCs in RetINaBox and uncovering the circuits that underlie their feature-selective responses (Fig. 7A–C). Users start by selecting a mystery RGC from a drop-down list in the GUI. In Phase 1, users are tasked with discovering which visual stimulus activates the mystery RGC. Upon identifying the preferred visual stimulus for a mystery RGC, users can verify in the software that they have correctly identified the target visual stimulus (Fig. 7D). In Phase 2, users must discover the connectivity settings that enable the mystery RGC to be selective for the target visual stimulus they just discovered (Fig. 7D).
Lesson 4—discovery mode. Users select a mystery ganglion cell (A), test various visual stimuli to discover the preferred stimulus of the mystery ganglion cell (B), and discover the circuit connectivity that generates such a feature selectivity (C), all using the Discovery Mode GUI tab (D).
Discussion
Taking inspiration from groups that developed interactive learning tools for neuroscience education (Dragly et al., 2017; Gage and Marzullo, 2022) and that made electronic models of the visual system (Delbrück and Liu, 2004; Li et al., 2010), we developed RetINaBox. However, unlike these previous systems, we tailored RetINaBox to let users learn about several concepts in visual neuroscience—ON/OFF, center-surround, orientation-selective, and direction-selective feature-selective tuning properties—through the act of exploration and discovery. Our goal was to recreate, in a classroom setting, the lab experience of discovering the specific visual stimuli that activate visual neurons.
RetINaBox is an electronic visual stimulation/detection device paired with a computer. RetINaBox comes with four detailed lesson plans in which users are guided to develop and test hypotheses while working toward circuit models for different feature detectors. While going through the lessons and building/testing circuits, users learn important concepts in neuroscience, including excitatory and inhibitory synaptic connectivity, response thresholds, spatiotemporal processing, and parallel processing.
It is important to note that RetINaBox represents a simplification of actual biological circuits (e.g., we bypass bipolar/horizontal/amacrine cells in the retina and replace them with sign, delay and ON/OFF functions). Nonetheless, we believe that RetINaBox provides a useful heuristic for approximating visual neuroscience experiments for educational purposes insomuch as it allows users to focus on general concepts related to feature-selective tuning preferences. These concepts can then be applied toward understanding how such feature-selective computations are implemented in various ways in different parts of the visual system. To ensure that RetINaBox users are not left with an incorrect understanding of how the actual visual system works, the lesson plans include details about the biological circuits that underlie the various visual feature detectors covered in Lessons 1–3.
We designed RetINaBox for neuroscience outreach events, whether with high school students, undergraduates, or the general public. Additionally, while someone who is specifically studying circuit processing in the retina or visual cortex may find RetINaBox simplistic, we believe it can be a useful tool for conveying concepts in visual processing to graduate students working in molecular, genetic, and clinical aspects of vision, who often do not have backgrounds in circuits and systems neuroscience. To aid in its use, in addition to lesson plans, we also provide teaching slides (see Extended Data 1 or GitHub), meaning that RetINaBox can be easily incorporated into existing neuroscience classes or quickly set up as an outreach activity.
Because RetINaBox is built from simple electronic components (photodiodes and LEDs) and a few 3D-printed parts, it can be expanded or altered by users for their own specific use cases. Along these lines, while RetINaBox uses most of the Raspberry Pi's GPIO ports, several remain unused and could be leveraged for additional purposes. Furthermore, while we tried to keep RetINaBox as simple as possible while still providing a versatile system for exploring ON/OFF, center-surround, orientation-selective, and direction-selective processing, the GUI software is open-source and can be edited to add new features. As one example, if a user wanted to include additional details about temporal processing, they could add time constant variables to enable responses to be either sustained or transient. As another example, a user could add more RGCs to better highlight parallel processing and population codes. As such, while RetINaBox is a powerful learning tool as is, it can be extended in various ways as users see fit.
Data Availability
Custom code, 3D-print files, the user manual, and the lesson plans can be found here: https://github.com/Trenholm-Lab/RetINaBox.
Footnotes
The authors declare no competing financial interests.
We thank A. Krishnaswamy for critical discussions; H. Velde for noting that “retina in a box” featured “ina in a,” which prompted us to shorten the name from Retina-In-A-Box to RetINaBox; and the following funding sources: a Vision Science Research Network Merit Scholarship to F.A.A.; a National Sciences and Engineering Research Council of Canada (NSERC) and Fonds de Recherche du Québec (FRQ) scholarship to E.C.; a Canadian Institutes of Health Research (CIHR) PhD fellowship to M.Q.L.; an NSERC Undergraduate Summer Research Award, a Tanenbaum Open Science Institute Launchpad award, and an NSERC graduate student scholarship to M.L.; a Tomlinson PhD fellowship to M.W.; an NSERC Discovery Grant (RGPIN-2020-05105; Discovery Accelerator Supplement, RGPAS-2020-00031) and CIFAR Canada AI Chair (Learning in Machines and Brains Fellowship) to B.A.R.; and a Canada Research Chair, an Alfred P. Sloan Foundation Research Fellowship, and NSERC Discovery Grants (RGPIN-2018-03852 and RGPIN-2025-05567) to S.T. We also acknowledge outreach funding from Brain Canada, the Vision Science Research Network, and the Canadian Association for Neuroscience.
↵*B.B., F.A.A., and E.C. contributed equally to this work.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.