Abstract
It is often claimed that tools are embodied by their user, but whether the brain actually repurposes its body-based computations to perform similar tasks with tools is not known. A fundamental computation for localizing touch on the body is trilateration. In this computation, the location of touch on a limb is computed by integrating estimates of the distance between sensory input and its boundaries (e.g., elbow and wrist of the forearm). As evidence of this computational mechanism, tactile localization on a limb is most precise near its boundaries and least precise in the middle. Here, we show that the brain repurposes trilateration to localize touch on a tool, despite large differences in initial sensory input compared with touch on the body. In a large sample of participants, we found that localizing touch on a tool produced the signature of trilateration, with highest precision close to the base and tip of the tool. A computational model of trilateration provided a good fit to the observed localization behavior. To further demonstrate the computational plausibility of repurposing trilateration, we implemented it in a three-layer neural network that was based on principles of probabilistic population coding. This network determined hit location in tool-centered coordinates by using a tool’s unique pattern of vibrations when contacting an object. Simulations demonstrated the expected signature of trilateration, in line with the behavioral patterns. Our results have important implications for how trilateration may be implemented by somatosensory neural populations. We conclude that trilateration is likely a fundamental spatial computation that unifies limbs and tools.
- computation
- embodiment
- space
- tactile localization
- tool use
Significance Statement
It is often claimed that tools are embodied by the user, but computational evidence for this claim is scarce. We show that to localize touch on a tool, the brain repurposes a fundamental computation for localizing touch on the body, trilateration. A signature of trilateration is high localization precision near the boundaries of a limb and low precision in the middle. We find that localizing touch on a tool produces this signature of trilateration, which we characterize using a computational model. We further demonstrate the plausibility of embodiment by implementing trilateration within a three-layer neural network that transforms a tool’s vibrations into a tool-centered spatial representation. We conclude that trilateration is a fundamental spatial computation that unifies limbs and tools.
Introduction
The proposal that the brain treats a tool as if it were an extended limb (tool embodiment) was first made over a century ago (Head and Holmes, 1911). From the point of view of modern neuroscience, embodiment would entail that the brain reuses its sensorimotor computations when performing the same task with a tool as with a limb. There is indirect evidence that this is the case (for review, see Maravita and Iriki, 2004; Martel et al., 2016), such as the ability of tool-users to accurately localize where a tool has been touched (Miller et al., 2018) just as they would on their own body. Several studies have highlighted important similarities between tool-based and body-based tactile spatial processing (Yamamoto and Kitazawa, 2001; Kilteni and Ehrsson, 2017; Miller et al., 2018), including at the neural level in the activity of fronto-parietal regions (Miller et al., 2019a; Pazen et al., 2020; Fabio et al., 2022). Tool use also modulates somatosensory perception and action processes (Cardinali et al., 2009, 2011, 2012, 2016; Sposito et al., 2012; Canzoneri et al., 2013; Miller et al., 2014, 2017, 2019b; Garbarini et al., 2015; Martel et al., 2019; Romano et al., 2019).
The above findings suggest that functional similarities between tools and limbs exist. However, direct evidence that body-based computational mechanisms are repurposed to sense and act with tools is lacking. For this to be possible, the nervous system would need to resolve the differences in the sensory input that follows touch on the skin versus on a tool. Unlike the skin, tools are not innervated with mechanoreceptors. Touch location is instead initially encoded in the tool’s mechanical response; for example, in how it vibrates when striking an object (Miller et al., 2018). Repurposing a body-based neural computation to perform the same function for a tool (i.e., embodiment) requires overcoming this key difference in the sensory input signal. The present study uses tool-based tactile localization (Miller et al., 2018) as a case study to provide the first neurocomputational test of embodiment.
Tactile localization on the body is often characterized by greater precision near body-part boundaries (e.g., joints or borders), a phenomenon called perceptual anchoring (Cholewiak and Collins, 2003; de Vignemont et al., 2009). A recent study found converging evidence that perceptual anchors are the signature of trilateration (Miller et al., 2022), a computation used by surveyors to localize an object within a map. To do so, a surveyor estimates the object’s distance from multiple landmarks of known positions. When applied to body maps (Fig. 1A, bottom), a “neural surveyor” localizes touch on a body part by estimating the distance between sensory input and body-part boundaries (e.g., the wrist and elbow for the forearm). To estimate the touch location in limb-centered coordinates, these two distance estimates can be integrated to produce a Bayes-optimal location percept (Ernst and Banks, 2002; Körding and Wolpert, 2004; Clemens et al., 2011). Consistent with Weber’s Law and log-coded spatial representations (Petzschner et al., 2015), noise in each distance estimate increased linearly as a function of distance (Fig. 1B). Integrating them resulted in an inverted U-shaped noise profile across the surface, with the lowest noise near the boundaries and highest noise in the middle (i.e., perceptual anchoring).
In the present study, we investigated whether trilateration is repurposed to localize touch on a tool (Fig. 1A). If this is indeed the case, localizing touch on a tool would be characterized by an inverted U-shaped pattern of variable errors across its surface (Fig. 1B). We first provide a theoretical formulation of how trilateration could be repurposed to sense with a tool, arguing that the brain uses the tool’s vibrational properties to stand in for a representation of the physical space of the tool (Miller et al., 2018). In this formulation, trilateration is repurposed by computing over a vibratory feature space (Fig. 2), using its boundaries as proxies for the boundaries of physical tool space. Distance estimates (Fig. 1A) are then computed within a neural representation of the feature space, just as they would be for a representation of body space. Next, we characterize the ability of participants to localize touch on a tool (Fig. 1C) and use computational modeling to verify the expected computational signature of trilateration (Fig. 1B). Finally, we use neural network modeling to implement the vibration-to-location transformation required for trilaterating touch location on a tool, providing one possible mechanism for how embodiment is implemented. In all, our findings solidify the plausibility of trilateration as the computation underlying tactile localization on both limbs and tools.
Materials and Methods
Theoretical formulation of trilateration
In the present section, we provide a theoretical formulation of trilateration and how it can be applied to localizing touch within a somatosensory-derived coordinate system, be it centered on a body part or on the surface of a tool (Fig. 1A). The general computational goal of trilateration is to estimate the location of an object by calculating its distance from vantage points of known position, which we will refer to as landmarks. Applied to tactile localization, this amounts to estimating the location of touch by averaging over distance estimates taken from the boundaries of the sensory surface (Fig. 1A), which serve as the landmarks and are assumed to be known to the nervous system via either learning or sensory feedback (Longo et al., 2010). For a body part (e.g., the forearm), the landmarks are often its joints (e.g., wrist and elbow) and lateral sides. For simple tools such as rods, the landmarks correspond to the handle and the tip; previous research has shown that users can sense their positions from somatosensory feedback during wielding (Debats et al., 2012).
We will first consider the general case of localizing touch within an unspecified somatosensory coordinate system. For simplicity, we will consider only a single dimension of the coordinate system, with localization between its two boundaries. We propose that the somatosensory system needs only three spatial variables to localize touch on a surface: the location of the sensory input and the locations of the surface's two boundaries, which serve as the landmarks.
Trilateration performs the necessary computation to transform these variables into an estimate of the location of touch within the surface-centered coordinate system. Its first step is to estimate the distance between the sensory input and each boundary (Eq. 2).
Given the above distance estimates (Eq. 2), we can derive two estimates of touch location, one anchored to each boundary of the surface (Eq. 3).
These two location estimates can be used to derive a final estimate. However, given the presence of distance-dependent noise, the precision of each estimate will vary across the sensory surface (Fig. 1B). Assuming a flat prior for touch location, the statistically optimal solution (i.e., maximum likelihood) is to integrate both estimates (Eqs. 4, 5).
Here, the mean of the integrated estimate is a reliability-weighted average of the two individual estimates, and its variance is always lower than the variance of either estimate alone.
The integrated posterior therefore inherits the distance-dependent noise of both estimates: its variance is lowest near the two boundaries and highest toward the middle of the surface, producing the inverted U-shaped pattern of variability that is the signature of trilateration (Fig. 1B).
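To make this computation concrete, the following minimal sketch (Python) simulates the integration step under the assumptions above: two boundary-anchored location estimates whose noise grows linearly with distance from their respective landmarks, combined by precision weighting. The noise parameters are illustrative only and are not fitted values; the deposited code (see Data and code availability) implements the full model.

```python
import numpy as np

np.random.seed(0)

surface = 1.0        # normalized surface length (0 = one boundary, 1 = the other)
noise_slope = 0.12   # illustrative: estimate noise grows linearly with distance (Weber-like)
noise_floor = 0.01   # illustrative baseline uncertainty of each distance estimate

def trilaterate(true_loc, n_trials=5000):
    """Simulate reliability-weighted integration of two boundary-anchored estimates."""
    d1, d2 = true_loc, surface - true_loc            # distances to each landmark
    s1 = noise_floor + noise_slope * d1              # SD of estimate anchored at boundary 1
    s2 = noise_floor + noise_slope * d2              # SD of estimate anchored at boundary 2
    est1 = np.random.normal(true_loc, s1, n_trials)  # estimate from landmark 1
    est2 = np.random.normal(true_loc, s2, n_trials)  # estimate from landmark 2
    w1, w2 = 1 / s1**2, 1 / s2**2                    # precision weights (maximum likelihood)
    return (w1 * est1 + w2 * est2) / (w1 + w2)

for loc in np.linspace(0.1, 0.9, 9):
    print(f"touch at {loc:.1f}: integrated SD = {trilaterate(loc).std():.4f}")
# The integrated SD is lowest near the boundaries and peaks toward the middle (inverted U).
```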
Computing a tool-centered spatial code with trilateration
Let us now consider the more specific case of performing trilateration for touch on a tool (Fig. 1A, top). Because the tool surface is not innervated, spatial information does not arise from a distribution of receptors but must instead be inferred from sensory information during tool-object contact. However, as we will see, this information forms a feature space that can computationally stand in for the real physical space of the tool (Fig. 2). Trilateration can be performed on this feature space, leading to a tool-centered code.
As with the body, the somatosensory system needs three variables to localize touch in tool-centered coordinates: the location of touch and the locations of the tool's two boundaries (i.e., its handle and tip). Unlike the body, however, these variables cannot be read out from a map of receptors; they must be inferred from the rod's mechanical response. When a rod strikes an object, it vibrates at a set of resonant modes whose amplitudes vary lawfully with the location of contact along its surface (Miller et al., 2018).
Touch location can therefore be encoded in a unique combination of modal amplitudes, called vibratory motifs. These motifs constitute a multidimensional feature space that embodies a vibration-to-location isomorphism (Fig. 2B). Theoretically, this isomorphic mapping between the feature space of the vibrations and tool-centered space can computationally stand in for the physical space of the rod. We can therefore re-conceptualize the three initial spatial variables as positions within this feature space, with its boundaries (the motifs produced by touches at the handle and at the tip) serving as proxies for the boundaries of the physical tool.
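As a purely illustrative example of such an isomorphism, the sketch below assigns each contact location the amplitudes of five idealized fixed-free mode shapes. These shapes and values are assumptions for demonstration only; they are not the measured motifs of a real hand-held rod reported in Miller et al. (2018).

```python
import numpy as np

def toy_motif(contact_loc, n_modes=5):
    """Toy vibration-to-location mapping: each mode's amplitude is taken to be the
    magnitude of an idealized fixed-free mode shape at the contact point.
    These mode shapes are assumed purely for illustration."""
    modes = np.arange(1, n_modes + 1)
    return np.abs(np.sin((modes - 0.5) * np.pi * contact_loc))  # contact_loc in [0, 1], handle to tip

for loc in (0.25, 0.50, 0.75):
    print(f"contact at {loc:.2f} -> amplitudes {np.round(toy_motif(loc), 2)}")
# Each contact location yields a distinct amplitude pattern (a 'motif'), so the feature
# space of modal amplitudes can stand in for physical location along the rod.
```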
Neural network implementation for trilateration on a tool
Somatosensory regions are characterized by spatial maps of the surface of individual body parts (Penfield and Boldrey, 1937). Based on this notion, the above formulation of trilateration to tactile localization on the body surface was implemented in a biologically inspired two-layer feedforward neural network (Miller et al., 2022). The first layer consisted of units that were broadly tuned to touch location in skin-based coordinates, as is thought to be encoded by primary somatosensory cortex (S1). The second layer consisted of units whose tuning was characterized by distance-dependent gradients (either in peak firing rate and/or tuning width) that were anchored to one of the joints. They therefore embodied the distance computation as specified in Equations 2, 3. A Bayesian decoder demonstrated that the behavior of this network matched what would be expected by optimal trilateration (Equations 2–5), displaying distance-dependent noise and an inverted U-shaped variability following integration.
While this network relies on the observation that individual primary somatosensory neurons are typically tuned to individual regions of the skin (Delhaye et al., 2018), can it also be re-used for performing trilateration in vibration space? The vibratory motifs are unlikely to be spatially organized across the cortical surface. Instead, the nervous system must internalize the isomorphic mapping between the motifs and the physical space of the tool (Fig. 2). Disrupting the expected vibrations disrupts localization (Miller et al., 2018), suggesting that the user has internal models of rod dynamics (Imamizu et al., 2000). We assume that this internal model is implemented in units that are tuned to the statistics of the vibratory motifs.
We implemented the trilateral computation (Eqs. 2–5) in a three-layer neural network with four processing stages (Fig. 3). First, the amplitudes of each mode are estimated by a population of units with subpopulations tuned to each resonant mode (layer 1). Second, activation in each subpopulation is integrated by units tuned to the multidimensional statistics of the motifs (layer 2). This layer effectively forms the internal model of the feature space that is isomorphic to the rod’s physical space. Next, this activation pattern is transformed into tool-centered coordinates (Eqs. 2, 3) via two decoding subpopulations whose units are tuned to distance from the boundaries of the feature space (Eq. 3; layer 3). The population activity of each decoding subpopulation reflects the likelihoods in Equation 4 (Jazayeri and Movshon, 2006). Lastly, the final tool-centered location estimate is derived by a Bayesian decoder (Ma et al., 2006) that integrates the activity of both subpopulations (Eq. 5).
The feature space of vibrations is multidimensional, composed of a theoretically infinite number of modes. However, only the first five modes (Fig. 2A) are typically within the bandwidth of mechanoreceptors (i.e., ∼10–1000 Hz; Johansson and Flanagan, 2009). The first layer of our network was therefore composed of five subpopulations, each encoding an estimate of the amplitude of a specific mode (Fig. 3A, bottom). These units were broadly tuned, with Gaussian (bell-shaped) tuning curves over mode amplitude (Eq. 6) and Poisson-like spiking variability (Eq. 7).
The amplitude θ of each mode is tied directly to the stimulus location L (Miller et al., 2018). The function of the next layer is to integrate the estimated amplitudes of each mode, encoded in the population response of the first layer.
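A minimal sketch of the first-layer encoding stage described above is given below, assuming illustrative values for the number of units, gains, and tuning widths; these are not the values used in the reported simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

N_MODES = 5      # resonant modes within mechanoreceptor bandwidth
N_UNITS = 50     # units per mode-tuned subpopulation (illustrative)
GAIN = 20.0      # peak expected spike count per time window (illustrative)
SIGMA = 0.1      # tuning width over normalized amplitude (illustrative)

# Preferred amplitudes tile the normalized range [0, 1] within every mode subpopulation.
preferred = np.linspace(0, 1, N_UNITS)

def layer1_response(amplitudes):
    """Layer 1: for each mode, a subpopulation of Gaussian-tuned units emits Poisson spike
    counts whose expected rate peaks when the mode's amplitude matches a unit's preference."""
    rates = GAIN * np.exp(-(amplitudes[:, None] - preferred[None, :])**2 / (2 * SIGMA**2))
    return rng.poisson(rates)   # shape: (N_MODES, N_UNITS) spike counts

amps = np.array([0.9, 0.4, 0.7, 0.2, 0.5])   # a toy motif (normalized amplitudes of modes 1-5)
spikes = layer1_response(amps)
print(spikes.shape, spikes.sum(axis=1))       # total spikes per mode-tuned subpopulation
```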
The function of the third layer was to estimate the location of touch L in tool-centered coordinates, given the population response of the second layer.
As in the previous neural network for body-centered tactile localization (Miller et al., 2022), the distance computation (Eqs. 2, 3) was embodied by distance-dependent gradients in the tuning of units within each of the two decoding subpopulations: a unit's gain decreased with the distance of its preferred location from that subpopulation's anchoring landmark (i.e., the handle or the tip).
When neuronal noise is Poisson-like (as in Eq. 7), the gain of a neural population response reflects the precision (i.e., inverse variance) of its estimate (Ma et al., 2006). Therefore, given the aforementioned distance-dependent gradient in gain, noise in each subpopulation’s location estimate (that is, its uncertainty) will increase as a function of distance from a landmark (i.e., the handle or tip). Consistent with several studies (Jazayeri and Movshon, 2006; Ma et al., 2006), we assume that the population responses encode log probabilities. We can therefore decode a maximum likelihood estimate of touch location from each subpopulation.
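A minimal, self-contained sketch of this decoding stage is given below. It assumes an illustrative linear gain gradient anchored at each landmark, Poisson spiking, and decoding by maximizing the Poisson log-likelihood over candidate locations; summing the two subpopulations' log-likelihoods implements the integration step. None of the parameter values are those used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

locs = np.linspace(0, 1, 201)     # candidate touch locations (0 = handle, 1 = tip)
prefs = np.linspace(0, 1, 60)     # preferred locations of units in each subpopulation
width = 0.08                      # tuning width (illustrative)

def gains(anchor):
    """Gain gradient: peak firing falls off with a unit's preferred distance from its landmark."""
    return 40.0 * (1.0 - 0.8 * np.abs(prefs - anchor))

def tuning(anchor):
    # Expected spike count of every unit (rows) for every candidate location (columns).
    return gains(anchor)[:, None] * np.exp(-(prefs[:, None] - locs[None, :])**2 / (2 * width**2))

f_handle, f_tip = tuning(0.0), tuning(1.0)

def log_likelihood(spikes, f):
    # Poisson log-likelihood over candidate locations (terms constant in location dropped).
    return spikes @ np.log(f + 1e-12) - f.sum(axis=0)

true_loc = 0.75
idx = np.argmin(np.abs(locs - true_loc))
r_handle = rng.poisson(f_handle[:, idx])   # spikes in the handle-anchored subpopulation
r_tip = rng.poisson(f_tip[:, idx])         # spikes in the tip-anchored subpopulation

ll_handle, ll_tip = log_likelihood(r_handle, f_handle), log_likelihood(r_tip, f_tip)
print("handle-anchored estimate:", locs[np.argmax(ll_handle)])
print("tip-anchored estimate:   ", locs[np.argmax(ll_tip)])
print("integrated estimate:     ", locs[np.argmax(ll_handle + ll_tip)])  # log-likelihoods sum
```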
With a different encoding layer, this same network architecture implements trilateration for localizing touch in body-centered coordinates (Miller et al., 2022). Our present neural network (Equations 6–12) generalizes the Bayesian formulation of trilateration (Equations 2–5) to localizing touch on a tool, using a vibratory feature space as a proxy for tool-centered space. The flow of activity in this network can be visualized in Figure 3B, where the touch occurs at 75% of the tool's surface.
To systematically investigate the behavior of this network, we simulated 5000 instances of touch at a wide range of locations (10% to 90% of the space) on the tool surface using the above network. The inputs to the neural network were the mode amplitudes θ for the corresponding location L. For simplicity, we did not model the actual process of mode decomposition from the spiking behavior of mechanoreceptors (Miller et al., 2018), but we did assume that this process is affected by sensory noise (Faisal et al., 2008). Therefore, for each simulation, the input mode amplitudes were corrupted by noise before being encoded by the first layer.
Behavioral experiment
Participants
Forty right-handed participants (24 females, 23.7 ± 2.5 years of age) completed our behavioral experiments. Two participants were removed because they were unable to follow the task instructions, leaving 38 for analysis. All participants had normal or corrected-to-normal vision and no history of neurologic impairment. Every participant gave informed consent before the experiment. The study was approved by the ethics committee (CPP SUD EST IV, Lyon, France).
Experimental procedure
During the task, participants were seated comfortably in a cushioned chair with their torso aligned with the edge of a table and their right elbow placed in a padded arm rest. The entire arm was hidden from view with a long occluding board. A 60-cm-long rod (handle length: 12 cm; cross-sectional radius: 0.75 cm) was placed in their right hand. This rod was either wooden (25 participants) or PVC (13 participants). The arm was positioned at a height that left a 1 cm separation between the rod and the object (see below) when the rod was held parallel to the table. On the surface of the table, an LCD screen (70 × 30 cm) lay backside down in the lengthwise orientation; the edge of the LCD screen was 5 cm from the table’s edge. The center of the screen was aligned with the participant’s midline.
The task of participants was to localize touches resulting from active contact between the rod and an object (foam-padded wooden block). In an experimental session, participants completed two tasks with distinct reporting methods (order counterbalanced across participants). In the image-based task, participants used a cursor to indicate the corresponding location of touch on a downsized drawing of a rod (20 cm in length; handle to tip); the purpose of using a downsized drawing was to dissociate it from the external space occupied by the real rod. The drawing began 15 cm from the edge of the table, was raised 5 cm above the table surface, and was oriented in parallel with the real rod. The red cursor (circle, 0.2-cm radius) was constrained to move in the center of the screen occupied by the drawing. In the space-based task, participants used a cursor to indicate the corresponding location of touch within an empty LCD screen (white background). The cursor was constrained to move along the vertical bisection of the screen and could be moved across the entire length of the screen. It is critical to note that in this task, participants were forced to rely on somatosensory information about tool length and position as no other sensory cues were available to do so.
The trial structure for each task was as follows. In the “precontact phase,” participants sat facing the computer screen with their left hand on a trackball. A red cursor was placed at a random location within the vertical bisection of the screen. A “go” cue (brief tap on the right shoulder) indicated that they should actively strike the object with the rod. In the “localization phase,” participants made their task-relevant judgment with the cursor, controlled by the trackball. Participants never received feedback about their performance. To minimize auditory cues during the task, pink noise was played continuously over noise-cancelling headphones.
The object was placed at one of six locations, ranging from 10 cm from the handle to the tip (10–60 cm from the hand; steps of 10 cm). The number of object locations was unknown to participants. In each task, there were ten trials per touch location, making 60 trials per task and 120 trials in total. The specific location for each trial was chosen pseudo-randomly. The entire experimental session took ∼45 min.
The experiment started with a 5-min sensorimotor familiarization session. Participants were told to explore, at their own pace, how the tool felt to contact the object at different locations. They were instructed to pay attention to how the vibrations varied with impact location. Visual and auditory feedback of the tool and tool-object contact was prevented with a blindfold and pink noise, respectively. Participants were, however, allowed to hold the object in place with their left hand while contacting it with the tool but were not allowed to haptically explore the rod.
At the end of the space-based task, participants used the cursor to report where they felt the tip of the rod (aligned in parallel to the screen). The judged location of the tip (mean: 56.5 cm; SEM: 1.62 cm) was very similar to the rod’s actual length (i.e., 60 cm). It is critical to reiterate that participants had never seen the rod prior to this point in the experiment and therefore likely relied on somatosensory feedback about its dimensions.
Data analysis
Regression analysis
Before analysis, all judgments in the image-based task were converted from pixels of drawing space to percentage of tool space. All judgments in the space-based task were normalized such that their estimated tip location corresponded to 100% of tool space. We then used least-squares linear regression to analyze the localization accuracy. The mean localization judgment for each touch location was modelled as a function of actual object location. Accuracy was assessed by comparing the group-level confidence intervals around the slope and intercept.
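As an illustration of this accuracy analysis, the following sketch fits a least-squares regression to hypothetical per-location mean judgments (the values below are invented for illustration; the deposited data contain the actual judgments).

```python
import numpy as np
from scipy import stats

# Hypothetical per-location means for one participant (percent of tool space).
actual    = np.array([16.7, 33.3, 50.0, 66.7, 83.3, 100.0])
perceived = np.array([18.0, 34.5, 48.0, 64.0, 82.0,  97.5])

fit = stats.linregress(actual, perceived)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.1f}")
# A slope near 1 and an intercept near 0 indicate accurate localization.
```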
Trilateration model
Our model of trilateration in the somatosensory system assumes that the perceived location of touch is a consequence of the optimal integration of two independent location estimates, one anchored to the handle and one to the tip of the rod. Each estimate is corrupted by noise that increases linearly with distance from its anchoring landmark, on top of a landmark-specific baseline uncertainty (Eq. 13).
The three free parameters of the model were fit to the variable errors of each participant across the six touch locations.
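The sketch below illustrates one way such a fit can be performed. The parameterization shown (a shared distance-dependent noise slope plus a separate baseline uncertainty for the handle- and tip-anchored estimates) and the data values are assumptions for illustration, not the exact form or values used in our analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def trilateration_sd(x, slope, base_handle, base_tip):
    """Assumed parameterization: each landmark-anchored estimate has its own baseline SD plus
    noise growing linearly with distance from that landmark; the integrated SD is the
    reliability-weighted combination of the two."""
    s1 = base_handle + slope * x         # x in [0, 1]: distance from the handle
    s2 = base_tip + slope * (1 - x)      # distance from the tip
    return np.sqrt((s1**2 * s2**2) / (s1**2 + s2**2))

# Hypothetical variable errors (SD, in % of tool space) at the six touch locations.
x_touch = np.array([1/6, 2/6, 3/6, 4/6, 5/6, 1.0])
observed_sd = np.array([6.5, 8.0, 9.2, 9.6, 9.0, 7.5])

params, _ = curve_fit(trilateration_sd, x_touch, observed_sd,
                      p0=[10.0, 4.0, 6.0], bounds=(0, np.inf))
pred = trilateration_sd(x_touch, *params)
r2 = 1 - np.sum((observed_sd - pred)**2) / np.sum((observed_sd - observed_sd.mean())**2)
print("fitted [slope, base_handle, base_tip]:", np.round(params, 2), " R2 =", round(r2, 2))
```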
Boundary truncation model
Boundary truncation provides one alternative model to trilateration. This model assumes that the estimate of location is drawn from a Gaussian distribution whose responses are truncated at the boundaries of the response range (i.e., the handle and the tip), which reduces variability for touches near those boundaries (Petzschner et al., 2015).
We fit this truncation model to the participant-level variable errors in each of our experiments. The standard deviation for each location was taken as the standard deviation of a Gaussian centered on that location after truncation at the boundaries of the response range.
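For illustration, the predicted variable error under truncation can be computed from a truncated normal distribution. The sketch below assumes truncation at 0% and 100% of tool space and an illustrative underlying standard deviation; treating that underlying standard deviation as the model's free parameter is an assumption made for this example.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_sd(mu, sigma, lower=0.0, upper=100.0):
    """Predicted variable error under boundary truncation: the SD of a Gaussian centered on
    the touch location (mu) after truncating responses to the allowed response range."""
    a, b = (lower - mu) / sigma, (upper - mu) / sigma   # bounds in standard-score units
    return truncnorm.std(a, b, loc=mu, scale=sigma)

sigma = 12.0                                            # underlying SD (illustrative)
for mu in (16.7, 33.3, 50.0, 66.7, 83.3, 100.0):        # touch locations in % of tool space
    print(f"touch at {mu:5.1f}%: predicted SD = {truncated_sd(mu, sigma):.2f}")
# Truncation alone also yields lower variability near the boundaries, which is why this
# model must be compared against trilateration directly.
```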
Model comparisons
We used the Bayesian Information Criterion (BIC) to compare the boundary and trilateration models. The difference in the BIC (ΔBIC) was used to determine a significant difference in fit. Consistent with convention, the chosen cutoff for moderate evidence was a ΔBIC of 2 and the cutoff for strong evidence was a ΔBIC of 6.
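A minimal sketch of this comparison, assuming least-squares fits with Gaussian residuals, is given below; the residual sums of squares and parameter counts are placeholders rather than values from our data.

```python
import numpy as np

def bic_from_rss(rss, n_points, n_params):
    """BIC for a least-squares fit, assuming Gaussian residuals (constant terms dropped)."""
    return n_points * np.log(rss / n_points) + n_params * np.log(n_points)

# Hypothetical residual sums of squares for one participant (6 data points per model).
bic_tri = bic_from_rss(rss=3.1, n_points=6, n_params=3)     # trilateration model
bic_trunc = bic_from_rss(rss=9.4, n_points=6, n_params=1)   # truncation model (assumed 1 parameter)
delta = bic_trunc - bic_tri
print(f"dBIC = {delta:.1f}")   # > 2: moderate evidence; > 6: strong evidence for trilateration
```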
Data and code availability
The neural network and anonymized behavioral data have been deposited in the Open Science Framework (DOI 10.17605/OSF.IO/ERBGW).
Results
Accurate localization of touch on a tool
In the current experiment (n = 38), we investigated whether tactile localization on a 60 cm hand-held rod is marked by the inverted U-shaped pattern of variability (Fig. 1B) that is characteristic of trilateration when localizing touch on the body. In two tasks, we measured participants’ ability to localize an object that was actively contacted with a hand-held tool. In the image-based task, participants indicated the point of touch on a downsized drawing of the tool. In the space-based task, participants indicated the point of touch in external space. The latter task ensured that localization was not truncated by boundaries in the range of possible responses.
Consistent with prior results (Miller et al., 2018), we found that participants were generally quite accurate at localizing touch on the tool. Linear regressions (Fig. 4A) comparing perceived and actual hit location found slopes near unity in both the image-based task (mean slope: 0.93, 95% CI [0.88, 0.99]) and the space-based task (mean slope: 0.89, 95% CI [0.82, 0.95]). Analysis of the variable errors (2 × 6 repeated measures ANOVA) found a significant main effect of hit location (F(5,185) = 36.1, p < 0.001) but no main effect of task (F(1,37) = 0.39, p = 0.54) and no interaction (F(5,185) = 0.21, p = 0.96). Crucially, the pattern of variable errors (Fig. 4B) in both tasks displayed the hypothesized inverted U-shape, which was of similar magnitude to what has been observed for touch on the arm (Cholewiak and Collins, 2003; Miller et al., 2022).
Computational modeling of behavior
We next used computational modeling to confirm that the observed pattern of variable errors was indeed because of trilateration. We fit each participant’s variable errors with a probabilistic model of optimal trilateration (Fig. 1A,B) that was derived from its theoretical formulation (see Materials and Methods). We compared the trilateration model to an alternative hypothesis: the inverted U-shaped pattern is because of truncation at the boundaries of localization (Petzschner et al., 2015), which cuts off the range of possible responses and thus produces lower variability at these boundaries. We fit a boundary truncation model for direct comparison with our trilateration model. Given the lack of a main effect of task and to increase statistical power, we collapsed across both tasks in this analysis.
Our computational model of trilateration provided a good fit to the variable errors observed during tactile localization on a tool. This was evident at the group level, where the magnitude of variable errors was similar to what has been found when localizing touch on the arm (Fig. 5A). We further observed a high coefficient of determination at the level of individual participants (median R2: 0.75; range: 0.29–0.95); indeed, 30 out of 38 participants had an R2 > 0.6. The fits of the trilateration model to the data of six randomly chosen participants can be seen in Figure 5B, and the fits to each participant’s behavior can be seen in Extended Data Figures 5-1 and 5-2. In contrast, the R2 of the boundary truncation model was substantially lower than that of the trilateration model (median: 0.30; range: −0.19–0.71), never showing a better fit to the data in any participant (Fig. 6A).
Extended Data Figure 5-1
Trilateration model fits for participants S1–S19
Fit of the trilateration model to the variable error (black dots) of participants S1–S19 (top-to-bottom; left-to-right). The purple line corresponds to the model fit. The goodness of fit is displayed as the R2.
Extended Data Figure 5-2
Trilateration model fits for participants S20–S38
Fit of the trilateration model to the variable error (black dots) of participants S20–S38 (top-to-bottom; left-to-right). The purple line corresponds to the model fit. The goodness of fit is displayed as the R2.
We next compared each model directly using the Bayesian Information Criteria (BIC). The BIC score for the trilateration model was lower in all 38 participants (mean ± SD; trilateration: 11.88 ± 5.88; truncation: 18.74 ± 4.70). Statistically, 33 participants showed moderate evidence (ΔBIC > 2) and 20 participants showed strong evidence (ΔBIC > 6) in favor of trilateration (Fig. 6B). In total, our results strongly suggest that, as with the body, localizing touch on a tool is consistent with a computation via trilateration.
Neural network simulations
Finally, we simulated trilateration on a tool using a biologically inspired neural network with an architecture similar to the one we used previously for touch on the body (Miller et al., 2022). The goal of these simulations was to concretely demonstrate that the feature space of vibratory motifs could stand in for the physical space of the rod. Our neural network thus took the mode amplitudes as input and trilaterated the resulting touch location in tool-centered coordinates (5000 simulations per location).
Both the mode and feature space layers of the neural network (Fig. 3, bottom and middle) produced unbiased sensory estimates with minimal uncertainty (Extended Data Fig. 7-1). Crucially, both subpopulations in the distance-computing layer (layer 3; Fig. 3, top) were able to localize touch with minimal constant error (Fig. 7A, upper panel), demonstrating that each could produce unbiased estimates of location from the sensory input. However, as predicted given the gradient in tuning parameters, the noise in their estimates rapidly increased as a function of distance from each landmark (Fig. 7B, upper panel), forming an X-shaped pattern across the surface of the tool.
Extended Data Figure 7-1
Intermediate output of the Mode and Feature layers
(A) Localization accuracy for the sensory estimates decoded from the Mode (top panel) and Feature layers (bottom panel). Note that the ‘location’ decoded here is best conceptualized as a position within the vibratory feature space, as spatial localization is performed via trilateration at higher layers of the network. (B) Uncertainty in the sensory estimates decoded from the Mode (top panel) and the Feature layers (bottom panel).
We next examined the output of the Bayesian decoder from Equations 11, 12 (Fig. 7, lower panel). As expected, we observed the computational signature of trilateration. Integrating both estimates resulted in an inverted U-shaped pattern of decoding noise across the surface of the tool (Fig. 7B, lower panel), with the lowest decoding noise near the landmarks and the highest decoding variance in the middle. Crucially, this is the exact pattern of variability we observed in our behavioral experiments (see above) and have previously observed for tactile localization on the body. These simulations establish the plausibility of trilateration as a computation that can turn a vibratory code into a spatial representation.
Discussion
If tools are embodied by the sensorimotor system, we would expect that the brain repurposes its body-based sensorimotor computations to perform similar tasks with tools. Using tactile localization as our case study, we uncovered multiple pieces of evidence that are consistent with this embodied view. First, as is the case for body parts, we observed that localizing touch on the surface of a tool is characterized by perceptual anchors at the handle and tip (de Vignemont et al., 2009). Second, computational modeling of behavioral responses suggests that they are the result of the probabilistic computation involving trilateration. Indeed, perceptual anchors are a computational signature of trilateration. Finally, using a simple three-layer population-based neural network, we demonstrated the possibility of trilateration in the vibratory feature space evoked by touches on tools. This neural network transformed a vibration-based input into a spatial code, reproducing perceptual anchors on the tool surface. These findings go well beyond prior research on embodiment (Martel et al., 2016) by identifying a computation that functionally unifies tools and limbs. Indeed, they suggest that trilateration is a spatial computation employed by the somatosensory system to localize touch on body parts and tools alike (Miller et al., 2022). They further have important implications for how trilateration would be repurposed at a neural level for tool-extended sensing.
If trilateration is a fundamental spatial computation used by the somatosensory system, it should be employed to solve the same problem (i.e., localization) regardless of whether the sensory surface is the body or a tool. Previous tactile localization studies have reported increased perceptual precision near the boundaries of the hands (Elithorn et al., 1953; Miller et al., 2022), arm (Cholewiak and Collins, 2003; de Vignemont et al., 2009; Miller et al., 2022), feet (Halnan and Wright, 1960), and abdomen (Cholewiak et al., 2004). These perceptual anchors are a signature of a trilateration computation (Miller et al., 2022). The results of the present study are consistent with the use of trilateration to localize touch on tools as well.
Our findings provide computational evidence that tools are embodied in the sensorimotor system (Martel et al., 2016), an idea that was proposed over a century ago (Head and Holmes, 1911). The close functional link between tools and limbs is not just a superficial resemblance but rather a reflection of the repurposing of neurocomputational resources dedicated to sensing and acting with a limb to that with a tool (Makin et al., 2017). This repurposing may be one reason that tool use leads to measurable changes in body perception and action processes (Cardinali et al., 2009; Canzoneri et al., 2013; Miller et al., 2014, 2019a).
Whereas the present study focused on simply-shaped tools (i.e., straight rods), tactile localization is also possible on more complexly-shaped tools (Yamamoto et al., 2005). We propose that trilateration also underlies tactile localization on these tools. We leveraged our trilateration model to simulate patterns of tactile localization on rods with different numbers of segments (Fig. 8). For multisegmented limbs (e.g., the arm), trilateration occurs locally within each segment (Cholewiak and Collins, 2003; Miller et al., 2022). That is, the signature inverted U-shaped pattern of variability is observed within each segment (e.g., upper and lower arms). Our simulations suggested that the same would be true for multisegmented tools (Fig. 8B). We predict that tactile localization within each segment of a rod would be characterized by the signature pattern of variability indicative of trilateration.
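The logic of this prediction can be sketched by applying the same reliability-weighted integration separately within each segment of a hypothetical two-segment rod; the noise parameters below are illustrative and are not those used for Figure 8.

```python
import numpy as np

def segment_sd(x, slope=0.12, floor=0.01):
    """Integrated SD of trilateration within one segment of unit length (x in [0, 1])."""
    s1, s2 = floor + slope * x, floor + slope * (1 - x)
    return np.sqrt((s1**2 * s2**2) / (s1**2 + s2**2))

# Hypothetical two-segment rod: trilateration is performed locally within each segment,
# so the inverted U-shaped variability repeats once per segment.
positions = np.linspace(0, 2, 21)                 # position along a two-segment rod
for p in positions:
    print(f"{p:4.1f}  predicted SD = {segment_sd(p % 1.0):.4f}")
```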
Although trilateration was repurposed for localizing touch on a rod, we observed a noticeable difference in the overall shape of variable error between localizing touch on a rod versus limb (e.g., the arm; Fig. 5A). Whereas localization uncertainty (i.e., variable error) is typically symmetric about the center of a limb (Miller et al., 2022), uncertainty was asymmetric for the rod. Specifically, variable errors were lower near the handle than the tip, peaking away from the center of the rod and toward the tip. These patterns of variable error were also visible in the behavior of individual participants (Extended Data Figs. 5-1 and 5-2) and are a direct consequence of differences in the baseline uncertainty of each distance estimate (Eq. 13), as demonstrated by simulations in Miller et al. (2022).
There are at least two potential sources for these differences in baseline uncertainty. First, striking the rod near the tip may produce less consistent sensory information (i.e., vibrations), translating into greater sensory uncertainty of where the rod is touched. However, this explanation is unlikely since the hypothesized differences in sensory consistency were not observed in a previous study that characterized a rod’s vibratory motifs (Miller et al., 2018). Instead, the source of this difference may lie in the uncertainty of where each boundary is perceived in space via proprioceptive feedback (Eq. 3). The location of the handle is well-defined, as it corresponds to the position of the hand. The location of the tip is less well defined, as it must be extracted indirectly from proprioceptive feedback from the forelimb (Debats et al., 2012). This likely corresponds to higher estimation uncertainty of its position in space, contributing to greater baseline uncertainty of the tip-based distance estimate (Eq. 13). Future studies should attempt to adjudicate between these two hypotheses.
Another important difference between limbs and tools is the sensory input used to derive localization estimates. While the skin is innervated with sensory receptors, the somatosensory system must “tune into” a tool’s mechanical response to extract meaningful information from it. It was previously proposed that where a rod is touched is encoded by the amplitudes of its resonant responses when contacting an object (Miller et al., 2018, 2019b). These resonant modes form a feature space that is isomorphic with the physical space of the tool. At a peripheral level, these resonances are re-encoded by the spiking patterns of tactile mechanoreceptors (Johnson, 2001). Therefore, unlike for touch on the body, localizing touch on a tool requires the somatosensory system to perform a temporal-to-spatial transformation.
We used neural network simulations to embody the necessary transformations to implement trilateration on a tool. Our neural network assumes that the human brain contains neural populations that encode for the full feature space of rod vibration. While very little is known about how these types of naturalistic vibrations are represented by the somatosensory system, our modeling results and prior research (Miller et al., 2018, 2019a) suggest that there are neural populations that encode their properties. Previous work demonstrated that individual neurons in primary somatosensory cortex multiplex both amplitude and frequency in their firing properties (Harvey et al., 2013). Recent evidence further suggests that human S1 is tuned to individual vibration frequencies (Wang and Yau, 2021). Our neural network modeling assumes that there are also neurons tuned to the amplitude of specific frequencies, though direct empirical evidence for this tuning is currently lacking. The existence of this coding would be consistent with the finding that S1 performs the initial stages of localization on a rod (Miller et al., 2019a). Furthermore, resonant amplitudes are crucial pieces of information in the natural statistics of vibrations, making it plausible that they are encoded at some stage of processing. Our results therefore open up a new avenue for neurophysiological investigations into how naturalistic vibrations are encoded by the somatosensory system.
The present study demonstrates the biological possibility that the resonant feature space can stand in for the physical space of the tool, allowing for trilateration to be performed to localize touch in tool-centered coordinates. It is interesting to note that the present neural network had a similar structure to one we previously demonstrated could perform trilateration on the body surface. The biggest difference is the input layer, which must first encode the vibration information. However, once this is transformed into the representation of the feature space, the computation proceeds as it would for the body. Note that this does not necessitate that the same neural populations localize touch on limbs and tools (Schone et al., 2021), but only that the same computation is performed when localizing touch on both surfaces. Our network therefore provides a concrete demonstration of what it means to repurpose a body-based computation to localize touch on a tool. The repurposing of the neural network architecture for trilateration explains tool embodiment and the emergence of a shared spatial code between tools and skin.
Footnotes
The authors declare no competing financial interests.
- Received March 22, 2023.
- Revision received September 13, 2023.
- Accepted September 25, 2023.
- Copyright © 2023 Miller et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.