
Research Article: Methods/New Tools, Novel Tools and Methods

Machine Learning-Based Pipette Positional Correction for Automatic Patch Clamp In Vitro

Mercedes M. Gonzalez, Colby F. Lewallen, Mighten C. Yip and Craig R. Forest
eNeuro 26 July 2021, 8 (4) ENEURO.0051-21.2021; DOI: https://doi.org/10.1523/ENEURO.0051-21.2021
Author affiliation: Georgia Institute of Technology, George W. Woodruff School of Mechanical Engineering, Atlanta, GA 30332

Visual Abstract


Abstract

Patch clamp electrophysiology is a common technique used in neuroscience to understand individual neuron behavior, allowing one to record current and voltage changes with superior spatiotemporal resolution compared with most electrophysiology methods. While patch clamp experiments produce high-fidelity electrophysiology data, the technique is onerous and labor intensive. Despite the emergence of patch clamp systems that automate key stages of the typical patch clamp procedure, full automation remains elusive. Patch clamp pipettes can miss the target cell during automated experiments because of positioning errors in the robotic manipulators, which can easily exceed the diameter of a neuron. Further, when patching in acute brain slices, the inherent light scattering from non-uniform brain tissue can complicate pipette tip identification. We present a convolutional neural network (CNN), based on ResNet101, that identifies and corrects pipette positioning errors before each patch clamp attempt, thereby preventing the deleterious accumulation of positioning errors. This deep-learning-based pipette detection method localized the pipette to within 0.62 ± 0.58 μm, improving the cell detection and whole-cell patch clamp success rates by 71% and 59%, respectively, compared with the state-of-the-art cross-correlation method. Furthermore, this technique reduced the average time for pipette correction by 81%. This technique enables real-time correction of pipette position during patch clamp experiments with accuracy and recording quality similar to manual patch clamp, making notable progress toward full human-out-of-the-loop automation of patch clamp electrophysiology.

  • automated
  • CNN
  • deep learning
  • electrophysiology
  • machine learning
  • patch clamp

Significance Statement

The patch clamp technique, while difficult and time intensive, remains necessary for fully elucidating individual neuron behavior. This deep-learning-based method for pipette correction will improve the yield and speed of automated patch clamp experiments, enabling higher throughput and real-time pipette correction during fully automated patch clamp experiments.

Introduction

Characterizing neuronal function on a single cell level is crucial to unraveling the biological mechanisms underlying brain activity. One of the most important techniques used in neuroscience to understand individual neuron behavior is patch clamp electrophysiology. This Nobel prize-winning technique allows one to record subthreshold current and voltage changes, enabling scientists to better understand neuronal communication. While optical methods offer a promising non-invasive method to study single neurons (Hochbaum et al., 2014; Kiskinis et al., 2018; Adam et al., 2019; Fan et al., 2020), their reliance on relative measurements rather than absolute voltage or current and suboptimal spatiotemporal resolution still require patch clamp to validate recordings of individual cellular behavior.

Typically, an in vitro patch clamp experiment is performed as follows: one views a brain slice under a microscope, manually maneuvers and delicately places a 1- to 2-μm tip of a glass pipette into contact with a 10-μm diameter cell membrane, creates a high-resistance seal between the pipette and cell membrane, and breaks into the cell to create a whole-cell configuration. This technique is immensely time intensive even for a skilled expert under optimal conditions. To improve the throughput and yield of these essential yet challenging experiments, several groups have invented automated patch clamp rigs for both in vitro (Wu et al., 2016; Kolb et al., 2019; Lewallen et al., 2019) and in vivo (Kodandaramaiah et al., 2012; Kolb et al., 2013; Annecchino et al., 2017; Stoy et al., 2017; Suk et al., 2017; Holst et al., 2019) electrophysiology, including a handful of techniques developed specifically for automated pipette localization (Long et al., 2015; Koos et al., 2017, 2021) and cell tracking (Lee et al., 2018).

One of the most challenging steps to automate in these rigs is the accurate and repeatable placement of the pipette tip close to the membrane of a cell (Long et al., 2015). Conventionally, patch pipettes are controlled by micromanipulators that have random and systematic errors on the order of 10 μm (Kolb et al., 2019) when repeatedly moving to and from the same location. A major drawback of previous pipette tip localization techniques (Long et al., 2015; Koos et al., 2017, 2021) is that their accuracy is significantly reduced when real-world background lighting variation and noise are introduced. Although these methods succeed in cultured cell experiments, light scattering from brain tissue induces significant image noise and renders them practically useless in acute slice experiments, since they rely on a clear image of the pipette. To overcome this obstacle, we implemented a convolutional neural network (CNN), ResNet101, to automatically identify and correct the pipette tip localization error for automated in vitro patch clamp experiments. This method not only improves the precise placement of the pipette near the cell membrane, but also reduces the time required to localize the pipette tip over a cell, thereby improving the overall throughput and efficiency of the automated patch clamp process.

Materials and Methods

Coordinate system and definition of errors

To accurately identify the pipette location for patch clamp experiments, we defined a coordinate system relative to the objective location so that the center of the field of view, with the pipette tip in focus, was considered the origin. Hereafter, the view of the brain slice under the microscope will be referred to as the xy-plane. The z direction is defined by the vertical distance perpendicular to that plane, with z = 0 at the location where the pipette was perfectly in focus.

There were three types of positioning errors addressed, as shown in Figure 1A. When moving the pipette, the pipette is commanded to move to the desired position (white), typically coincident with the center of a cell (x,y)_cell in automated patch clamp experiments. Because of random and systematic errors in the three-axis manipulators, the true pipette position (blue) is not equal to the desired position, resulting in the true pipette error (t_x, t_y, t_z). We can estimate the true position of the pipette with the CNN, resulting in the computed CNN position (red). This CNN position has a CNN error vector (c_x, c_y, c_z), defined by the difference between the true pipette position and the CNN position. Since we cannot determine the true pipette position during an automated patch clamp experiment, we must use the CNN position as a feedback signal. Thus, we use the difference between the desired position and the CNN position, called the measured error (m_x, m_y, m_z), to correct the pipette's position.
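A minimal numeric sketch of these three error definitions (all position values below are hypothetical, chosen only to illustrate the relationships):

```python
import numpy as np

# Hypothetical positions in μm, following the nomenclature above:
# desired = commanded target (e.g., the cell center), true_pos = actual
# pipette position, cnn_pos = position estimated by the network.
desired = np.array([0.0, 0.0, 0.0])
true_pos = np.array([4.2, -3.1, 2.0])   # manipulator error of several μm
cnn_pos = np.array([4.0, -3.3, 1.6])    # CNN estimate of the true position

true_error = true_pos - desired          # (t_x, t_y, t_z): unknowable in practice
cnn_error = true_pos - cnn_pos           # (c_x, c_y, c_z): the network's own error
measured_error = cnn_pos - desired       # (m_x, m_y, m_z): the usable feedback signal

# Correction step: command the manipulator to move opposing the measured error.
corrected_command = desired - measured_error
```

Because only the measured error is observable, correction converges as long as the CNN error stays smaller than the manipulator error, which is what Figure 1C verifies.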

Figure 1.

A, Schematic of the error nomenclature used in this work. B, Example images of the CNN identifying the pipette tip over a brain slice. C, Error distribution of the neural network testing dataset, n = 300 images (red), compared with the true pipette error after moving to the cleaning bath, n = 32 images (blue). D, Convergence of true pipette error magnitude using the CNN as the measurement feedback in the (top) xy-plane and (bottom) z direction. The black dotted lines indicate appropriate error ranges for patch clamp experiments, at 2.5 μm in the xy-plane (half the diameter of a typical cell) and 3 μm in the z direction. The box width indicates the first and third quartiles, the white line indicates the median, and the whiskers of the box plot indicate the most extreme, non-outlier data points. E, Spatial representation of pipette tip locations after the second iteration of CNN correction in the (top) xy-plane and (bottom) z direction. Black dotted lines indicate the range of 1 and 2 SDs.

Image collection

The image datasets used for training, validation, and testing in this work consisted of 1024 × 1280 eight-bit raw images. We used a standard electrophysiology setup (SliceScope Pro 3000, Scientifica Ltd) with PatchStar micromanipulators at a 24° approach angle. We used a 40× objective (LUMPFLFL40XW/IR, NA 0.8, Olympus) and a Rolera Bolt camera (QImaging), illuminated under DIC with an infrared light-emitting diode (Scientifica). The resulting field of view was 116 × 92 μm. All animal procedures were done in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and the Georgia Institute of Technology animal care committee's regulations.

Neural network training, validation, and testing data

To construct a representative dataset of pipette images, images of 3–5 MΩ (1- to 2-μm diameter tip) pipettes were collected over a plain background as well as over a brain slice. The motivation was to ensure that the network would be robust enough to identify pipettes in both scenarios, if necessary. The ground truth annotation process began by sending a pipette to a computer-generated randomized location in the xy-plane (±27 μm). The user manually annotated the location of the pipette tip, in pixels, and confirmed the pipette was in focus so that the pipette could be imaged at fixed intervals along the z-axis at this position in the xy-plane. The pipette would then automatically move down (only in the z direction) with a constant step size to a random lower limit distance of up to 100 μm, collecting images at each step and recording the manually annotated xy location and prescribed z location (based on step size) as an (x,y,z) coordinate in pixels. The step sizes were constant for each xy location, but randomized (within 5–20 μm) between locations. Once at the lower limit distance, the pipette would return to the in-focus position (z = 0) at the same xy location. To ensure that the pipette tip location was accurate, the user would again manually annotate the tip, saving the (x,y) coordinate in pixels, while in focus. The pipette would then step in the positive z direction, collecting images and recording coordinates at each step until reaching an upper limit distance. A total of 6678 raw annotated images were captured for the training, validation, and testing datasets. All training and testing data will be available at https://autopatcher.org/.
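The descending z-sweep for one xy location can be sketched as follows (a Python sketch of the sampling schedule only; the function name and parameters are illustrative, and hardware control is omitted):

```python
import random

def z_step_schedule(max_depth_um=100.0, step_range=(5.0, 20.0), rng=None):
    """Generate descending z positions for one xy location, mirroring the
    sweep above: a constant step size per location (randomized between
    locations within 5-20 μm), down to a random lower limit of at most
    max_depth_um. Returns the step size and the list of z positions (μm)."""
    rng = rng or random.Random()
    step = rng.uniform(*step_range)          # constant within this xy location
    lower_limit = rng.uniform(step, max_depth_um)
    z, zs = 0.0, []
    while z - step >= -lower_limit:          # step down, image, record (x, y, z)
        z -= step
        zs.append(z)
    return step, zs

step, zs = z_step_schedule(rng=random.Random(42))
```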

Image preprocessing

All images used for training and validation were preprocessed using contrast stretching (Gonzalez and Woods, 2018) to improve the ability to identify the pipette tip. To accomplish this, we calculated the average (x̄) and SD (σ) of the pixel intensities of each image and mapped the original pixel values to the range defined by (x̄ ± 2σ) for each image individually. This mapping improved the contrast by spreading a narrower band of pixel intensities across the full display range. Any pixel intensity outside the range [0, 1] after mapping was set to 0 or 1, respectively. The images were then cropped to a square from the center and downsized to 224 × 224 for use with the CNN. These images were then transformed to artificially increase the training dataset, making the network more robust to different orientations of the pipette. The images, and their corresponding pipette tip location annotations, were flipped horizontally, vertically, and both horizontally and vertically. These three augmentations resulted in a total training dataset of 24,747 images and a validation dataset of 765 images. The test images underwent the same preprocessing, but no augmentations, resulting in a total of 300 test images. All preprocessing and network training was done using MATLAB 2020a, and all patch clamp experiments were done with MATLAB and LabVIEW programs.
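The preprocessing steps above can be sketched in NumPy (the authors worked in MATLAB; function names here are illustrative, and the resize to 224 × 224 is left to any image library):

```python
import numpy as np

def contrast_stretch(img):
    """Linearly map pixel intensities so that (mean - 2*std, mean + 2*std)
    spans [0, 1], then clip out-of-range values to 0 or 1, per the text.
    Assumes a non-constant image (std > 0)."""
    x = img.astype(np.float64)
    mu, sigma = x.mean(), x.std()
    lo, hi = mu - 2 * sigma, mu + 2 * sigma
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def center_crop_square(img):
    """Crop the largest centered square (e.g., 1024x1024 from a 1024x1280 frame)."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return img[top:top + s, left:left + s]

def augment_flips(img, xy):
    """Return the three flipped copies used for augmentation, each paired with
    the correspondingly flipped (x, y) tip annotation in image coordinates."""
    h, w = img.shape[:2]
    x, y = xy
    return [
        (img[:, ::-1], (w - 1 - x, y)),             # horizontal flip
        (img[::-1, :], (x, h - 1 - y)),             # vertical flip
        (img[::-1, ::-1], (w - 1 - x, h - 1 - y)),  # both
    ]
```

The three flips per image are consistent with the reported counts: roughly 6,187 training images × 4 ≈ 24,747 after augmentation.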

CNN training

The pretrained network model, ResNet101, was used as the basis for this work. ResNet101 is a CNN, 101 layers deep, that is trained for classification problems. The residual network family is known for performing well in classification challenges because the depth of these CNNs leads to superior performance (Zhang et al., 2018). Here, we wanted to predict the (x,y,z) location of the pipette tip based on an image. To accomplish this, we replaced the final three layers of the ResNet101 architecture with a fully connected layer and a regression layer (Lathuiliere et al., 2020). This allowed us to define the output as a continuous 3 × 1 vector corresponding to the (x,y,z) location of the pipette tip.

The training options are summarized in Table 1. Of the optimizers available in the MATLAB Deep Learning Toolbox, the rmsprop (root mean square propagation) optimizer has been reported to give the greatest accuracy (Vani and Rao, 2019). The mini batch size should be a power of 2 and maximized for accuracy (Goodfellow et al., 2016). Because computer RAM was limited during training, we determined that a mini batch size of 16 was suitable for this application. The number of epochs was determined experimentally, aiming to minimize the root mean squared error (RMSE) during training while maximizing the number of epochs to ensure sufficient adjustment of the CNN's weights. It is conventional to use a dynamic learning rate, so the learn rate schedule was set to piecewise, where the learn rate began at the initial learn rate and monotonically decreased by the learn rate drop factor after each drop period (in epochs; Bengio, 2012). Bengio recommended beginning with a large learning rate and reducing it if the training loss does not converge. After testing a few different initial learn rates and drop factors, we found a suitable learn rate schedule. The validation frequency and patience were set to their default values as suggested by MATLAB (The MathWorks, 2020). We used a Dell Precision 5540 (NVIDIA GeForce GTX 1080 GPU, Intel Core i7-9850H CPU at 2.60 GHz, 32 GB RAM, Windows 10, 64-bit) to train, validate, and test the CNN. The validation data were shuffled with each epoch to prevent the CNN from over-fitting to the training and validation sets.
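The piecewise schedule reduces to a simple step decay; a sketch with illustrative hyperparameter values (not the paper's, which are in Table 1):

```python
def piecewise_lr(epoch, initial_lr=1e-3, drop_factor=0.5, drop_period=10):
    """Step-decay ("piecewise") schedule: start at initial_lr and multiply
    by drop_factor once every drop_period epochs. The values of the three
    hyperparameters here are illustrative only."""
    return initial_lr * drop_factor ** (epoch // drop_period)
```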

Table 1

CNN training options

CNN testing

To evaluate the accuracy of the CNN pipette tip identification used with an iterative proportional feedback controller, we performed a series of experiments over acute brain slices. Specifically, a LabVIEW program randomized the initial pipette location in the field of view (within ±27 μm in the xy-plane and ±6 μm in the z direction). The range in the z direction was limited to ±6 μm because pipette localization errors beyond this range, near the edges of the field of view or far out of focus, were not observed in practice and thus did not warrant excess training data. The CNN used the current image to determine the position of the pipette tip. From that CNN position, the measured error vector was calculated from the origin (center of the field of view, in focus) and used to correct the pipette location back to the origin. The CNN-based pipette tip identification algorithm was run recursively for a predetermined number of iterations (one to four). To determine the true pipette error after iterative correction, the pipette tip was then manually moved to the origin, and the change in manipulator position was saved as the true pipette error.
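The iterative correction loop can be sketched as follows; `cnn_estimate` and `move` stand in for network inference and the manipulator command, and the noise magnitudes in the toy usage are illustrative assumptions, not measured values:

```python
import numpy as np

def correct_pipette(start_pos, cnn_estimate, move, iterations=2, gain=1.0):
    """Iterative proportional correction toward the origin (field-of-view
    center, in focus). Each pass measures the error via the CNN estimate and
    commands an opposing move; two iterations suffice per Figure 1D."""
    pos = np.asarray(start_pos, dtype=float)
    for _ in range(iterations):
        measured = cnn_estimate(pos)        # measured error = CNN position - origin
        pos = move(pos, -gain * measured)   # command a move opposing the error
    return pos

# Toy usage: a noisy estimator and noisy manipulator still converge quickly.
rng = np.random.default_rng(0)
cnn = lambda p: p + rng.normal(0.0, 0.3, 3)        # ~0.3 μm CNN noise (assumed)
mv = lambda p, d: p + d + rng.normal(0.0, 0.1, 3)  # small manipulator noise (assumed)
final = correct_pipette([8.0, -6.0, 3.0], cnn, mv, iterations=2)
```

After the first iteration the residual error is only the CNN and manipulator noise, which is why additional iterations yield diminishing returns.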

Patch clamp experiments

We ran automated patch clamp experiments using a standard electrophysiology rig with four PatchStar micromanipulators on a universal motorized stage (Scientifica, Ltd). We used a peristaltic pump (120S/DV, Watson-Marlow) to perfuse the brain slices with buffer solution. A Multiclamp 700B amplifier (Molecular Devices) and a USB-6221 OEM data acquisition board (National Instruments) were used to collect recordings. We used a pressure control box (Neuromatic Devices) to regulate internal pipette pressure, as well as a custom machined chamber with a smaller side chamber for cleaning solution. We followed the cleaning protocol suggested by Kolb et al. (2016); however, we did not include rinsing in the cleaning protocol because recent work found that its omission does not impair whole-cell yield or recording quality (C. Landry, M. Yip, I. Kolb, W.A. Stoy, M.M. Gonzalez, C.R. Forest, unpublished observation). We compared the state-of-the-art cross-correlation method for pipette detection (Kolb et al., 2019) to the CNN method presented here in two different sets of experiments. To remove extraneous confounding variables, none of the patch clamp experiments included the cell tracking algorithm used by Kolb et al. (2019), so that any variation because of cell tracking would not affect the success rates of the two pipette identification methods.

Code accessibility

The code used to train and test the network is included as Extended Data 1, and is also available at https://github.com/mmgxw3/pipetteFindingCNN.

Extended Data 1

The MATLAB code. Download Extended Data 1, ZIP file.

Statistical analysis

To determine statistical significance of success rates, we used Fisher's exact test. For comparisons between groups, we used a one-way ANOVA and Tukey's HSD test. To test for normality, we used a two-sided χ2 test that combines skew and kurtosis (D'Agostino, 1971; D'Agostino and Pearson, 1973).
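Two of the named tests have direct SciPy equivalents; a sketch using success counts reconstructed from the reported rates (so treat the contingency table as approximate) and simulated residuals for the normality test:

```python
import numpy as np
from scipy import stats

# Fisher's exact test on pipette detection: ~21/32 successes (cross-correlation,
# 66%) vs 36/36 (CNN, 100%); counts reconstructed from the reported rates.
table = [[21, 11], [36, 0]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# D'Agostino-Pearson omnibus test (combines skew and kurtosis);
# scipy.stats.normaltest implements exactly this. Residuals simulated here
# with the reported xy SD of 0.31 μm.
residuals = np.random.default_rng(1).normal(0.0, 0.31, size=300)
k2, p_normal = stats.normaltest(residuals)
```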

Results

Validation of pipette position identification

To determine whether the network could accurately identify the pipette tip position over a brain slice, we tested the network on a set of 300 test images, manually annotated with ground truth positions. Representative test images of the pipette over a brain slice, with the CNN position indicated in red, are shown in Figure 1B. It is crucial that the CNN errors, c, are smaller than the true pipette errors, t, that accumulate during an experiment, to ensure that the pipette position error will converge. To demonstrate that the CNN errors, c, are smaller and more repeatable than the pipette errors from moving to the cleaning bath and back to the sample, t, these two distributions along each axis are displayed in Figure 1C. The mean absolute errors and SDs of the CNN errors for each of the axes are shown in Table 2.

Table 2

CNN error from test dataset

To ensure that the CNN successfully corrected the pipette tip position, we evaluated the network's ability to converge using the previously described testing workflow. The magnitudes of the true pipette error after one to four iterations in the xy-plane (|t_xy|) and z direction (|t_z|) are plotted in Figure 1D. While there was a significant difference between the first and second iterations (p = 0.001, Tukey's HSD test), there was no statistically significant difference between the second and third iterations in the xy-plane (p = 0.49, Tukey's HSD test). After the second correction, 62% of the attempts were within 1 SD (±0.31 μm) of the target location in the xy-plane (p = 0.016, D'Agostino test for normality, α = 0.05), and 86% of the attempts were within 2 SDs (±0.62 μm), as indicated by the circles in Figure 1E. Since the network was able to correct the pipette tip to within less than approximately half the diameter of a typical cell (10 μm) in the xy-plane with only two iterations of the CNN, we corrected the pipette position only twice for implementation in automated patch clamp experiments. The discretization apparent along the y-axis is the step size of the micromanipulators, indicating that we are approaching the stepper motor encoder resolution. In the z direction, 64% of attempts were within 1 SD (0.60 μm) of the target location (p = 0.026, D'Agostino test for normality, α = 0.05) and 84% of the attempts were within 2 SDs (1.2 μm), an acceptable range that we believed would not impair the ability of the pipette to find and patch clamp a cell. Accuracy in the z direction was less crucial since the approach method is to descend the pipette from 15 μm above the cell.

Automated patch clamp experiments

We compared the success rates of the CNN method and the state-of-the-art cross-correlation method for pipette detection, cell detection, and whole-cell recording. Success rates are defined as a fraction of all attempts using the same pipette detection method, independent of whether the previous steps were successful. Success is defined for each of the steps as follows: pipette detection is considered successful when the pipette position can be identified and corrected based on that identification. Cell detection is considered successful when the pipette resistance increases 0.2 MΩ over three consecutive descending 1-μm steps. Whole-cell patch clamp recording is defined by successful cell detection, gigaseal, and break-in. When using the CNN method, two corrections were done after the pipette was brought into the field of view, as previously described. All experiments were done over 5 d, using eight slices from five mice. The numbers of attempts with each method were 32 and 36 for the cross-correlation and CNN methods, respectively. These experiments were done independently, but prepared using the same protocols and solutions to reduce variability in slice health.
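The cell detection criterion can be sketched as a scan over the resistance trace; whether the 0.2 MΩ rise is cumulative across the three steps (as assumed here) or required at each step is our interpretation, and the function name is illustrative:

```python
def cell_detected(r_mohm, rise_mohm=0.2, steps=3):
    """Return True when pipette resistance rises monotonically over `steps`
    consecutive descending 1-μm steps with a cumulative increase of at least
    `rise_mohm` MΩ. The cumulative-rise reading of the criterion is an
    assumption."""
    for i in range(len(r_mohm) - steps):
        window = r_mohm[i:i + steps + 1]
        monotonic = all(b > a for a, b in zip(window, window[1:]))
        if monotonic and window[-1] - window[0] >= rise_mohm:
            return True
    return False
```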

The pipette detection, cell detection, and whole-cell success rates using cross-correlation were 66%, 59%, and 37%, respectively (n = 32). The pipette detection, cell detection, and whole-cell success rates using the CNN were 100%, 92%, and 64%, respectively (n = 36). These results are summarized in Figure 2A. Fisher's exact tests of the results indicate that the CNN improved the pipette detection success rate by 52% (p = 8e-5), the cell detection success rate by 54% (p = 0.001), and the whole-cell success rate by 70% (p = 0.05). Moreover, the CNN method could reliably identify the pipette position, regardless of the background noise in the image, within 2.71 ± 0.30 s, 81% faster than the average time of the cross-correlation method, as shown in Figure 2B.

Figure 2.

Comparing (A) pipette detection, cell detection, and whole-cell success rates and (B) time required for the cross-correlation and CNN methods (n = 32 and n = 36, respectively). The box width indicates the first and third quartiles, the white line indicates the median, and the whiskers of the box plot indicate the most extreme, non-outlier data points. Using Fisher's exact test; *p ≤ 0.05, ***p ≤ 0.001.

Electrophysiology data

The patch clamp experiments done with the CNN method yielded both voltage and current clamp data comparable to the quality of a manual patch clamp expert. An example image of the pipette placed on the cell is shown in Figure 3A. The whole-cell recording protocol used was the same as that of Kolb et al. (2019). The distributions of access resistance, membrane capacitance, and membrane resistance are shown in Figure 3B. The mean access resistance for recordings performed with the CNN was 14.1 MΩ, well within the accepted range among manual patch clamp experts (<40 MΩ; Kolb et al., 2019). A representative current clamp trace and the corresponding input current injection are displayed in Figure 3C. A voltage clamp trace is shown in Figure 3D.

Figure 3.

A, Example image of a pipette on a neuron during a whole-cell recording. B, Distributions of access resistance, membrane capacitance, and membrane resistance for n = 23 successful whole-cell patch clamp recordings using the CNN. The white lines indicate the median, the width of the boxes indicates the first and third quartiles, and the whiskers indicate the range of the data. C, Representative current clamp trace with current injection. D, Representative voltage clamp trace.

Discussion

One of the primary disadvantages of previously reported methods for pipette detection is their lack of reliability. With the cross-correlation method, the pipette tip could be identified in 66% of attempts (n = 32), failing because of the difference between the template image and the background noise from the brain slice (Kolb et al., 2019). The deep learning-based pipette detection method presented here offers an accurate and robust method for identifying the pipette tip position in automated patch clamp experiments, both over a clear background and above a brain slice. Other methods of pipette tip identification have reported accuracies of 12.06 ± 4.3 μm (Long et al., 2015), 3.53 ± 2.47 μm (Koos et al., 2017), and 0.99 ± 0.55 μm (Koos et al., 2021). We reduced the 3D positioning error, using two iterations of our CNN method, to 0.62 ± 0.58 μm. The CNN can more reliably identify the pipette location in the xy-plane than along the z-axis. This difference in error distributions likely arises because small changes in pipette position in the z direction (moving into and out of focus) are less clearly observable when viewing the pipette under the microscope, especially over a brain slice. However, despite this inherent ambiguity in the pipette tip position, the errors in the z direction are still significantly lower than previously reported.

By training a CNN to correct the pipette tip position during automated patch clamp experiments, we improved the success rate of pipette detection to 100%, compared with the 66% success rate of cross-correlation. This ability to reliably correct the pipette every time it is in the field of view could be used for automatic calibration or real-time tracking of the pipette's location for optimization of autopatching protocols. This method also improved the cell detection and whole-cell success rates by 54% and 70%, respectively, compared with the success rates of the cross-correlation method without cell tracking, demonstrating the importance of the accuracy and robustness of this crucial step in the autopatching process. Moreover, this CNN method without cell tracking performed similarly with cells 50–60 μm deep (64% whole-cell success) to Kolb et al. (2019), who reported a 60% whole-cell success rate using cross-correlation with cell tracking at the same cell depth. Furthermore, the average time required to correct the pipette position using this CNN method is 81% less than with the cross-correlation method, averaging 1.6 s per iteration of CNN identification and manipulator movement, opening doors to real-time tracking of the pipette tip during automated patch clamp experiments.

There were several limits to this study. For one, we used only one micromanipulator manufacturer (Scientifica). While there may be different error distributions among manufacturers, we anticipate that this method would still be effective if the modified ResNet101 architecture were trained with new images specific to the objective magnification and manipulator. Further, only pipettes with resistances in the range of 3–5 MΩ were used for training and testing, since this range is standard for patch clamp experiments in vitro. Pipettes used for other applications that are significantly narrower or wider would need more training data to ensure the network could reliably identify the tip's new geometry. Moreover, use with other objectives would also require collecting new training data. Finally, we omitted cell tracking in the automated patch clamp experiments so that we could isolate errors and measure success rate independently of the cell tracking algorithm.

Future work could use this CNN with cell tracking to simultaneously monitor and correct the pipette location with respect to the cell, potentially leading to even greater whole-cell success rates than previously reported. Moreover, this dual-monitoring could be used to continuously monitor the access resistance and correct the pipette position to maintain this resistance during longer duration experiments. Further, the combined monitoring of the cell and pipette positions may be of great use in multi-electrode automated patch clamp experiments, in which the brain tissue moves more from the simultaneous movement of multiple pipettes in the tissue. This work represents another significant step toward unmanned robotic patch clamp development.

Acknowledgments

Acknowledgements: We thank Dr. Matthew Rowan for insightful feedback on figures and data analysis.

Footnotes

  • M.C.Y. is receiving financial compensation for his work with Neuromatic Devices. All other authors declare no competing financial interests.

  • This work was supported by the National Institutes of Health (NIH) BRAIN Initiative Grant (National Eye Institute and National Institute of Mental Health Grant 1-U01-MH106027-01), the NIH Grant R01NS102727, the NIH Single Cell Grant 1 R01 EY023173, National Science Foundation Grants EHR 0965945 and CISE 1110947, and the NIH Grant R01DA029639.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Adam Y, Kim JJ, Lou S, Zhao Y, Xie ME, Brinks D, Wu H, Mostajo-Radji MA, Kheifets S, Parot V, Chettih S, Williams KJ, Gmeiner B, Farhi SL, Madisen L, Buchanan EK, Kinsella I, Zhou D, Paninski L, Harvey CD, et al. (2019) Voltage imaging and optogenetics reveal behaviour-dependent changes in hippocampal dynamics. Nature 569:413–417. doi:10.1038/s41586-019-1166-7 pmid:31043747
  2. Annecchino LA, Morris AR, Copeland CS, Agabi OE, Chadderton P, Schultz SR (2017) Robotic automation of in vivo two-photon targeted whole-cell patch-clamp electrophysiology. Neuron 95:1048–1055.e3. doi:10.1016/j.neuron.2017.08.018 pmid:28858615
  3. Bengio Y (2012) Practical recommendations for gradient-based training of deep architectures. In: Neural networks: tricks of the trade, pp 437–478. Berlin, Heidelberg: Springer.
  4. D’Agostino RB (1971) An omnibus test of normality for moderate and large sample size. Biometrika 58:341–348.
  5. D’Agostino R, Pearson ES (1973) Tests for departure from normality. Biometrika 60:613–622. doi:10.2307/2335012
  6. Fan LZ, Kheifets S, Böhm UL, Wu H, Piatkevich KD, Xie ME, Parot V, Ha Y, Evans KE, Boyden ES, Takesian AE, Cohen AE (2020) All-optical electrophysiology reveals the role of lateral inhibition in sensory processing in cortical layer 1. Cell 180:521–535.e18.
  7. Gonzalez RC, Woods RE (2018) Digital image processing, Ed 4. London: Pearson Education Limited.
  8. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. Cambridge: The MIT Press.
  9. Hochbaum DR, Zhao Y, Farhi SL, Klapoetke N, Werley CA, Kapoor V, Zou P, Kralj JM, MacLaurin D, Smedemark-Margulies N, Saulnier JL, Boulting GL, Straub C, Cho YK, Melkonian M, Ka Shu Wong G, Jed Harrison D, Murthy VN, Sabatini BL, et al. (2014) All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins. Nat Methods 11:825–833. doi:10.1038/nmeth.3000 pmid:24952910
  10. Holst GL, Stoy W, Yang B, Kolb I, Kodandaramaiah SB, Li L, Knoblich U, Zeng H, Haider B, Boyden ES, Forest CR (2019) Autonomous patch-clamp robot for functional characterization of neurons in vivo: development and application to mouse visual cortex. J Neurophysiol 121:2341–2357. doi:10.1152/jn.00738.2018 pmid:30969898
  11. Kiskinis E, Kralj JM, Zou P, Weinstein EN, Zhang H, Tsioras K, Wiskow O, Ortega JA, Eggan K, Cohen AE (2018) All-optical electrophysiology for high-throughput functional characterization of a human iPSC-derived motor neuron model of ALS. Stem Cell Reports 10:1991–2004.
  12. Kodandaramaiah SB, Franzesi GT, Chow BY, Boyden ES, Forest CR (2012) Automated whole-cell patch-clamp electrophysiology of neurons in vivo. Nat Methods 9:585–587.
  13. Kolb I, Holst G, Goldstein B, Kodandaramaiah SB, Boyden ES, Culurciello E, Forest CR (2013) Automated, in-vivo, whole-cell electrophysiology using an integrated patch-clamp amplifier. BMC Neurosci 14:P131. doi:10.1186/1471-2202-14-S1-P131
  14. Kolb I, Stoy WA, Rousseau EB, Moody OA, Jenkins A, Forest CR (2016) Cleaning patch-clamp pipettes for immediate reuse. Sci Rep 6:35001–35010. doi:10.1038/srep35001 pmid:27725751
  15. Kolb I, Landry CR, Yip MC, Lewallen CF, Stoy WA, Lee J, Felouzis A, Yang B, Boyden ES, Rozell CJ, Forest CR (2019) PatcherBot: a single-cell electrophysiology robot for adherent cells and brain slices. J Neural Eng 16:e046003. doi:10.1088/1741-2552/ab1834 pmid:30970335
  16. Koos K, Molnár J, Horváth P (2017) Pipette hunter: patch-clamp pipette detection. SCIA 2017:172–183.
  17. Koos K, Oláh G, Balassa T, Mihut N, Rózsa M, Ozsvár A, Tasnadi E, Barzó P, Faragó N, Puskás L, Molnár G, Molnár J, Tamás G, Horvath P (2021) Automatic deep learning driven label-free image guided patch clamp system for human and rodent in vitro slice physiology. Nat Commun 12. doi:10.1038/s41467-021-21291-4
  18. Lathuiliere S, Mesejo P, Alameda-Pineda X, Horaud R (2020) A comprehensive analysis of deep regression. IEEE Trans Pattern Anal Mach Intell 42:2065–2081. doi:10.1109/TPAMI.2019.2910523 pmid:30990175
  19. Lee J, Kolb I, Forest CR, Rozell CJ (2018) Cell membrane tracking in living brain tissue using differential interference contrast microscopy. IEEE Trans Image Process 27:1847–1861. doi:10.1109/TIP.2017.2787625 pmid:29346099
  20. Lewallen CF, Wan Q, Maminishkis A, Stoy W, Kolb I, Hotaling N, Bharti K, Forest CR (2019) High-yield, automated intracellular electrophysiology in retinal pigment epithelia. J Neurosci Methods 328:108442. doi:10.1016/j.jneumeth.2019.108442 pmid:31562888
  21. Long B, Li L, Knoblich U, Zeng H, Peng H (2015) 3D image-guided automatic pipette positioning for single cell experiments in vivo. Sci Rep 5:18426–18428. doi:10.1038/srep18426 pmid:26689553
  22. Stoy WA, Kolb I, Holst GL, Liew Y, Pala A, Yang B, Boyden ES, Stanley GB, Forest CR (2017) Robotic navigation to subcortical neural tissue for intracellular electrophysiology in vivo. J Neurophysiol 118:1141–1150. doi:10.1152/jn.00117.2017 pmid:28592685
  23. Suk HJ, van Welie I, Kodandaramaiah SB, Allen B, Forest CR, Boyden ES (2017) Closed-loop real-time imaging enables fully automated cell-targeted patch-clamp neural recording in vivo. Neuron 95:1037–1047.e11. doi:10.1016/j.neuron.2017.08.011 pmid:28858614
  24. The MathWorks (2020) Deep learning toolbox. Natick: The MathWorks. Available at https://www.mathworks.com/help/deeplearning/.
  25. Vani S, Rao TVM (2019) An experimental approach towards the performance assessment of various optimizers on convolutional neural network. Proceedings of the International Conference on Trends in Electronics and Informatics, ICOEI 2019, pp 331–336, Tirunelveli, India. IEEE. doi:10.1109/ICOEI.2019.8862686
  26. Wu Q, Kolb I, Callahan BM, Su Z, Stoy W, Kodandaramaiah SB, Neve R, Zeng H, Boyden ES, Forest CR, Chubykin AA (2016) Integration of autopatching with automated pipette and cell detection in vitro. J Neurophysiol 116:1564–1578. doi:10.1152/jn.00386.2016 pmid:27385800
  27. Zhang K, Sun M, Han TX, Yuan X, Guo L, Liu T (2018) Residual networks of residual networks: multilevel residual networks. IEEE Trans Circuits Syst Video Technol 28:1303–1314. doi:10.1109/TCSVT.2017.2654543

Synthesis

Reviewing Editor: Alexxai Kravitz, Washington University in St. Louis

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Suhasa Kodandaramaiah.

Consensus comments:

1) The paper states that code will be available at a Github link. Code should be live before publication. Training and testing data should also be made available if possible. (Note that eNeuro’s “double blind” review requires them to redact this link during review).

2) Methodological details were insufficient, as detailed in Reviewer comments below. These should be addressed in full.

3) The authors should clarify what is meant by success rate in Figure 2, i.e., what do the percentages for cell detection and whole-cell patching mean? For the CC method, is the success percentage defined as the percentage of all trials (successful and unsuccessful pipette detection) or the percentage within successful pipette detection trials?

4) The patching data shown in Figure 3 are one representative example. The authors should provide group metrics for successfully patched cells so that a reader can confirm the quality and variance of the data collected with this approach.

5) More information on the size of field of view and the angle of approach should be provided, as well as information on how changes in these variables affect the accuracy of the pipette detection. If a new pipette, or a new angle of approach is used, does the algorithm require re-training?

6) While a comparison is made with the cross correlation method, there is no citation for which cross correlation method was used.

Original reviewer comments:

*****************

Reviewer 1

*****************

There has been progress in recent years in automating patch clamping in vivo and in vitro, and increasingly sophisticated algorithms are being used to guide patch pipettes to target cells of interest. These advances could, in the very near future, lead to truly human-out-of-the-loop autonomous patch clamping robots. One sub-issue with these automation efforts, specific to image-guided patching using DIC optics in vivo, is the need to accurately determine the location of the patch pipette tip using computer vision. This paper deals with that issue. While this is a niche application, patch clamp electrophysiology is still the gold standard, and a faster, more accurate pipette tip detection algorithm that can be incorporated into fully autonomous patch clamp robots could be beneficial to the field, particularly if it can be combined with recent advances in deep learning-based cell detection algorithms being developed for automated patch clamping systems. Overall, however, there are key methodological details missing (see specific comments below). The paper relies on testing pipettes oriented at a single angle, and it is not currently clear how much variability was incorporated in both the training dataset and the final experiments. As a final comment, the paper is written for an audience that is already well versed in automated patch clamping, with a very narrow focus on pipette tip detection, which makes it unsuitable in its current form for a broader neuroscience audience.

Major Comments

- "Light scattering from the brain tissue induces significant noise in the image and renders these methods practically useless since they rely on a clear image of the pipette." This is a strong statement considering these approaches are still useful for patching in cultured cell preparations.

- A major claim of the paper is that previous pipette tip localization work is prone to errors arising from variations in lighting. While the authors report top-line numbers, there is little detail regarding the testing conditions for the two methods. What were the lighting and imaging conditions used in the training dataset and the experiments? In other words, will the algorithm perform similarly to the results shown in this work across multiple groups? Few methodological details are provided.

- Along the same lines, it is unclear how the training dataset was acquired and annotated.

- For this to be easily implemented by other groups, both the training datasets and experiment datasets should be made available (I know these may be TBs of data).

- There are methodological details missing in the description of the data pre-processing step.

- The health of the slice can have a significant effect on the eventual success rate of whole-cell patch clamping. In Figure 2, a comparison of the CNN and cross correlation methods is provided. It is not clear how the trials were conducted. Were both methods attempted on multiple cells in the same slice, or were these independent experiments? How many replicates does the data shown represent?

- Success rates for pipette detection, cell detection, and whole-cell patching are provided. It is unclear what the percentages for cell detection and whole-cell patching mean here, i.e., for the CC method, is the success percentage defined as the percentage of all trials (successful and unsuccessful pipette detection) or the percentage within successful pipette detection trials? If it is the latter, this is a surprising result, as you would not expect such a large increase in the success rate of patching; do the authors have an explanation for this? If it is the former, the increase in success rate is entirely due to the better pipette positioning.

- Information on the number of experimental sessions and replicates is missing in the discussion of success rates.

- As described, the convolutional neural network algorithm is only viable within ±27 μm of the origin in the xy plane and ±6 μm of "optimal focus" in the z direction. How large is the field of view relative to this area? I am not sure if this is a small region around the center of the image or a significant portion of the image. How is optimal focus defined? Depending on the numerical aperture of the lens, that itself may span a few micrometers. What happens if the pipette is not at optimal focus?

- The patching data shown in Figure 3 are entirely representative examples.

- While a comparison is made with the cross correlation method, there is no citation for which cross correlation method was used. There is one instance where they discuss multiple previous methods.

- Was the algorithm tested with pipettes at different approach angles (which is common in slice patching experiments)?

- How long do the acquisition of the training dataset and the training itself take? The 81% reduction in time taken to detect the pipette tip is significant and impressive.

Minor Comments

- Use of informal ‘micron’ instead of micrometer.

- Ref. 20, Koos et al., is now a published article. I believe it is Koos et al., Nat Commun, 2021.

- "In order to remove extraneous confounding variables, none of the patch clamp experiments included the cell tracking algorithm used by Kolb et al. so that any variation due to cell tracking would not affect the success rates of the two pipette identification methods." There are three papers authored by Kolb here; a more specific citation is needed.

- "With the cross correlation method, the pipette tip could be identified for 66% of attempts (n = 32), failing due to the difference between the template image and the background noise from the brain slice." Please insert a citation (I know it appears earlier).

- Line 117: RAM is itself an acronym for random-access memory; no need to say "RAM memory."

- The p vector is cited in the text, but Figure 1 does not show a p vector.

- The sentence in the abstract should read "whole-cell [patch clamp] success rate."

- Y-axis label is missing in Figure 2b

*****************

Reviewer 2

*****************

Machine learning-based pipette positional correction for automatic patch clamp in vitro

The paper describes a new method for identifying the pipette tip during patch clamp experiments. The method uses a convolutional neural network (CNN) based on ResNet101 to identify the pipette tip online during a patch attempt. The reported accuracy is 0.62 μm. Improvements in the whole-cell patch clamp success rate are also reported.

Overall, it is an important technological development. Pipette tip detection is crucial for automated patch clamp. Using a CNN for recognition and detection of the pipette tip is definitely important, considering the authors provide a detailed description of the process and make the code available.

However, there are several issues related to this paper:

1) It is not clear whether this algorithm can detect a pipette tip "de novo." One would expect the same pipette tip detection algorithm to be used for both initial tip detection and then for correction.

2) The second related issue comes from the fact that previous pipette tip detection methods have been published, which reduces the novelty of this paper.

3) Finally, this paper describes only the positional correction of the detected pipette tip and doesn’t really have any other additional data or automated patch clamp development, including any novel neuroscience data.

Citation: Gonzalez MM, Lewallen CF, Yip MC, Forest CR (2021) Machine Learning-Based Pipette Positional Correction for Automatic Patch Clamp In Vitro. eNeuro 26 July 2021, 8(4):ENEURO.0051-21.2021. doi:10.1523/ENEURO.0051-21.2021

Keywords

  • Automated
  • CNN
  • deep learning
  • electrophysiology
  • machine learning
  • patch clamp
