Research Article: New Research, Sensory and Motor Systems

A Somatosensory Computation That Unifies Limbs and Tools

Luke E. Miller, Cécile Fabio, Frédérique de Vignemont, Alice Roy, W. Pieter Medendorp and Alessandro Farnè
eNeuro 17 October 2023, 10 (11) ENEURO.0095-23.2023; https://doi.org/10.1523/ENEURO.0095-23.2023
Luke E. Miller
1Integrative Multisensory Perception Action and Cognition Team-ImpAct, Lyon Neuroscience Research Center, Institut National de la Santé et de la Recherche Médicale Unité 1028, Centre National de la Recherche Scientifique Unité 5292, 69500 Bron, France
2UCBL, University of Lyon 1, 69100 Villeurbanne, France
3Neuro-immersion, Hospices Civils de Lyon, 69500 Bron, France
4Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD, Nijmegen, The Netherlands
Cécile Fabio
1Integrative Multisensory Perception Action and Cognition Team-ImpAct, Lyon Neuroscience Research Center, Institut National de la Santé et de la Recherche Médicale Unité 1028, Centre National de la Recherche Scientifique Unité 5292, 69500 Bron, France
2UCBL, University of Lyon 1, 69100 Villeurbanne, France
3Neuro-immersion, Hospices Civils de Lyon, 69500 Bron, France
Frédérique de Vignemont
7Institut Jean Nicod, Department of Cognitive Studies, Ecole Normale Superieure, Paris Sciences et Lettres University, 75005 Paris, France
Alice Roy
6Laboratoire Dynamique du Langage, Centre National de la Recherche Scientifique, Unité Mixte de Recherche 5596, 69007 Lyon, France
W. Pieter Medendorp
4Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD, Nijmegen, The Netherlands
Alessandro Farnè
1Integrative Multisensory Perception Action and Cognition Team-ImpAct, Lyon Neuroscience Research Center, Institut National de la Santé et de la Recherche Médicale Unité 1028, Centre National de la Recherche Scientifique Unité 5292, 69500 Bron, France
2UCBL, University of Lyon 1, 69100 Villeurbanne, France
3Neuro-immersion, Hospices Civils de Lyon, 69500 Bron, France
5Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy

Abstract

It is often claimed that tools are embodied by their user, but whether the brain actually repurposes its body-based computations to perform similar tasks with tools is not known. A fundamental computation for localizing touch on the body is trilateration. Here, the location of touch on a limb is computed by integrating estimates of the distance between sensory input and its boundaries (e.g., elbow and wrist of the forearm). As evidence of this computational mechanism, tactile localization on a limb is most precise near its boundaries and least precise in the middle. Here, we show that the brain repurposes trilateration to localize touch on a tool, despite large differences in initial sensory input compared with touch on the body. In a large sample of participants, we found that localizing touch on a tool produced the signature of trilateration, with highest precision close to the base and tip of the tool. A computational model of trilateration provided a good fit to the observed localization behavior. To further demonstrate the computational plausibility of repurposing trilateration, we implemented it in a three-layer neural network that was based on principles of probabilistic population coding. This network determined hit location in tool-centered coordinates by using a tool’s unique pattern of vibrations when contacting an object. Simulations demonstrated the expected signature of trilateration, in line with the behavioral patterns. Our results have important implications for how trilateration may be implemented by somatosensory neural populations. We conclude that trilateration is likely a fundamental spatial computation that unifies limbs and tools.

  • computation
  • embodiment
  • space
  • tactile localization
  • tool use

Significance Statement

It is often claimed that tools are embodied by the user, but computational evidence for this claim is scarce. We show that to localize touch on a tool, the brain repurposes a fundamental computation for localizing touch on the body, trilateration. A signature of trilateration is high localization precision near the boundaries of a limb and low precision in the middle. We find that localizing touch on a tool produces this signature of trilateration, which we characterize using a computational model. We further demonstrate the plausibility of embodiment by implementing trilateration within a three-layer neural network that transforms a tool’s vibrations into a tool-centered spatial representation. We conclude that trilateration is a fundamental spatial computation that unifies limbs and tools.

Introduction

The proposal that the brain treats a tool as if it were an extended limb (tool embodiment) was first made over a century ago (Head and Holmes, 1911). From the point of view of modern neuroscience, embodiment would entail that the brain reuses its sensorimotor computations when performing the same task with a tool as with a limb. There is indirect evidence that this is the case (for review, see Maravita and Iriki, 2004; Martel et al., 2016), such as the ability of tool-users to accurately localize where a tool has been touched (Miller et al., 2018) just as they would on their own body. Several studies have highlighted important similarities between tool-based and body-based tactile spatial processing (Yamamoto and Kitazawa, 2001; Kilteni and Ehrsson, 2017; Miller et al., 2018), including at the neural level in the activity of fronto-parietal regions (Miller et al., 2019a; Pazen et al., 2020; Fabio et al., 2022). Tool use also modulates somatosensory perception and action processes (Cardinali et al., 2009, 2011, 2012, 2016; Sposito et al., 2012; Canzoneri et al., 2013; Miller et al., 2014, 2017, 2019b; Garbarini et al., 2015; Martel et al., 2019; Romano et al., 2019).

The above findings suggest that functional similarities between tools and limbs exist. However, direct evidence that body-based computational mechanisms are repurposed to sense and act with tools is lacking. For this to be possible, the nervous system would need to resolve the differences in the sensory input following touch on the skin or on a tool. Unlike the skin, tools are not innervated with mechanoreceptors. Touch location is instead initially encoded in the tool’s mechanical response; for example, in how it vibrates when striking an object (Miller et al., 2018). Repurposing a body-based neural computation to perform the same function for a tool (i.e., embodiment) requires overcoming this key difference in the sensory input signal. The present study uses tool-based tactile localization (Miller et al., 2018) as a case study to provide the first neurocomputational test of embodiment.

Tactile localization on the body is often characterized by greater precision near body-part boundaries (e.g., joints or borders), a phenomenon called perceptual anchoring (Cholewiak and Collins, 2003; de Vignemont et al., 2009). A recent study found converging evidence that perceptual anchors are the signature of trilateration (Miller et al., 2022), a computation used by surveyors to localize an object within a map. To do so, a surveyor estimates the object’s distance from multiple landmarks of known positions. When applied to body maps (Fig. 1A, bottom), a “neural surveyor” localizes touch on a body part by estimating the distance between sensory input and body-part boundaries (e.g., the wrist and elbow for the forearm). To estimate the touch location in limb-centered coordinates, these two distance estimates can be integrated to produce a Bayes-optimal location percept (Ernst and Banks, 2002; Körding and Wolpert, 2004; Clemens et al., 2011). Consistent with Weber’s Law and log-coded spatial representations (Petzschner et al., 2015), noise in each distance estimate increased linearly as a function of distance (Fig. 1B). Integrating them resulted in an inverted U-shaped noise profile across the surface, with the lowest noise near the boundaries and highest noise in the middle (i.e., perceptual anchoring).

Figure 1.

Model of trilateration and tool-sensing paradigm. A, The trilateral computation applied to the space of the arm (bottom) and a hand-held rod (top). Distance estimates from sensory input (black) and each boundary (d1 and d2) are integrated (purple) to form a location estimate. B, In our model, the noise in each distance estimate (d1, d2) increases linearly with distance. The integrated estimate forms an inverted U-shaped pattern. C, Two tool-sensing tasks used to characterize tactile localization on a hand-held rod. The purple arrow corresponds to the location of touch in tool-centered space. The red square corresponds to the judgment of location within the computer screen.

In the present study, we investigated whether trilateration is repurposed to localize touch on a tool (Fig. 1A). If this is indeed the case, localizing touch on a tool would be characterized by an inverted U-shaped pattern of variable errors across its surface (Fig. 1B). We first provide a theoretical formulation of how trilateration could be repurposed to sense with a tool, arguing that the brain uses the tool’s vibrational properties to stand in for a representation of the physical space of the tool (Miller et al., 2018). In this formulation, trilateration is repurposed by computing over a vibratory feature space (Fig. 2), using its boundaries as proxies for the boundaries of physical tool space. Distance estimates (Fig. 1A) are then computed within a neural representation of the feature space, just as they would be for a representation of body space. Next, we characterize the ability of participants to localize touch on a tool (Fig. 1C) and use computational modeling to verify the expected computational signature of trilateration (Fig. 1B). Finally, we use neural network modeling to implement the vibration-to-location transformation required for trilaterating touch location on a tool, providing one possible mechanism for how embodiment is implemented. In all, our findings solidify the plausibility of trilateration as the computation underlying tactile localization on both limbs and tools.

Figure 2.

Vibration modes and feature space. A, The shape of the first five modes ω for contact on a cantilever rod. The weight of each mode varies as a function of hit location. Each hit location is characterized by a unique combination of mode weights. B, The vibration-location feature space (purple) from handle (X1) to tip (X2). This feature space is isomorphic with the actual physical space of the rod. ω corresponds to a resonant frequency, the black dot corresponds to the hit location (as in Fig. 1A) within the feature space, and the arrows are the gradients of distance estimation during trilateration.

Materials and Methods

Theoretical formulation of trilateration

In the present section, we provide a theoretical formulation of trilateration and how it can be applied to localizing touch within a somatosensory-derived coordinate system, be it centered on a body part or the surface of a tool (Fig. 1A). The general computational goal of trilateration is to estimate the location of an object by calculating its distance from vantage points of known position, which we will refer to as landmarks. Applied to tactile localization, this amounts to estimating the location of touch by averaging over distance estimates taken from the boundaries of the sensory surface (Fig. 1A), which serve as the landmarks and are assumed to be known to the nervous system via either learning or sensory feedback (Longo et al., 2010). For a body part (e.g., the forearm), the landmarks are often its joints (e.g., wrist and elbow) and lateral sides. For simple tools such as rods, the landmarks correspond to the handle and tip; previous research has shown that users can sense their positions from somatosensory feedback during wielding (Debats et al., 2012).

We will first consider the general case of localizing touch within an unspecified somatosensory coordinate system. For simplicity, we will consider only a single dimension of the coordinate system, with localization between its two boundaries. We propose that the somatosensory system only needs three spatial variables, {x1, x2, x3}, to derive an estimate L̃ of the actual location of touch L in surface-centered coordinates. The variables x1 and x2 correspond to the proximal and distal boundaries, respectively. The variable x3 corresponds to the sensory input. Because of noise (Faisal et al., 2008), the nervous system does not represent variables as point estimates but as probability densities over some range of values (Pouget et al., 2013). Assuming normally distributed noise, each variable xi can thus be thought of as a Gaussian likelihood:

$$p(x_i \mid X_i) = \mathcal{N}\!\left(x_i;\, X_i,\, \sigma_i^2\right), \tag{1}$$

where the mean Xi corresponds to its true spatial position and the variance σi² corresponds to the uncertainty in its internal estimate. Here, X1 and X2 are the true positions of the landmarks (i.e., boundaries) and X3 is the position of the sensory input. It is important to note here that these positions can be specified within any shared coordinate system. For example, touch on the body is thought to initially be represented in skin-based coordinates (Medina and Coslett, 2010), not coordinates centered on a limb. The relationship between X3 and L therefore remains ambiguous without the proper computation to transform it into the actual surface-centered coordinates (Longo et al., 2010).

Trilateration performs the necessary computation to transform x3 into surface-centered coordinates (Miller et al., 2022). It does so by calculating its distance from the proximal and distal boundaries of the coordinate system (x1 and x2, respectively), producing two additional estimates:

$$p(d_1 \mid x_1, x_3) = \mathcal{N}\!\left(d_1;\, X_3 - X_1,\, \sigma_1^2(d_1)\right)$$
$$p(d_2 \mid x_2, x_3) = \mathcal{N}\!\left(d_2;\, X_2 - X_3,\, \sigma_2^2(d_2)\right), \tag{2}$$

where each distance estimate di corresponds to a Gaussian likelihood with a mean equal to the distance between X3 and the respective boundary and a variance that scales with distance. That is, localization estimates are more precise when the touch is physically closer to a boundary than when it is farther away (Fig. 1B). This distance-dependent noise is consistent with coding distance in log space (Petzschner et al., 2015) and is a consequence of how distance computation is implemented by a neural decoder (see below).

Given the above distance estimates (Eq. 2), we can derive two estimates of touch location L̃i that are aligned within a common coordinate system:

$$p(\tilde{L}_1 \mid L) = p(x_1 \mid X_1) + p(d_1 \mid x_1, x_3)$$
$$p(\tilde{L}_2 \mid L) = p(x_2 \mid X_2) - p(d_2 \mid x_2, x_3). \tag{3}$$

These two location estimates can be used to derive a final estimate. However, given the presence of distance-dependent noise, the precision of each estimate will vary across the sensory surface (Fig. 1B). Assuming a flat prior for touch location, the statistically optimal solution (i.e., maximum likelihood) is to integrate both estimates:

$$p(L \mid \tilde{L}_1, \tilde{L}_2) \propto p(\tilde{L}_1 \mid L)\, p(\tilde{L}_2 \mid L). \tag{4}$$

Here, the mean (μINT) and variance (σINT²) of the integrated surface-centered posterior distribution depend on the means (μ1 and μ2) and variances (σ1² and σ2²) of the individual estimates:

$$\mu_{INT} = \left(\frac{\mu_1}{\sigma_1^2} + \frac{\mu_2}{\sigma_2^2}\right)\sigma_{INT}^2, \qquad \sigma_{INT}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}. \tag{5}$$

The integrated posterior p(L|L̃1,L̃2) thus reflects the maximum-likelihood estimate of touch location L. Given that the noise in each individual estimate scales linearly with distance, integration has the consequence of producing an inverted U-shaped pattern of variance (Fig. 1B). This pattern of variability serves as a computational signature of trilateration, which has been observed for tactile localization on the arm and fingers (Miller et al., 2022). The present study investigates whether this is the case for localizing touch on a hand-held rod. Our computational analyses implement this probabilistic model of trilateration (see below).
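To make this signature concrete, the following minimal Python sketch evaluates the integrated variance across a normalized surface (0 = proximal landmark, 1 = distal landmark). It is an illustration of Equations 2–5 under assumed placeholder parameter values, not the analysis code used in the study.

```python
import numpy as np

def integrated_variance(L, sigma_hat=0.06, eps1=0.01, eps2=0.02):
    """Variance of the integrated location estimate at normalized location L."""
    d1, d2 = L, 1.0 - L                      # distances from the two landmarks
    var1 = (eps1 + sigma_hat * d1) ** 2      # noise grows linearly with distance
    var2 = (eps2 + sigma_hat * d2) ** 2
    return (var1 * var2) / (var1 + var2)     # optimal integration (Eq. 5)

locations = np.linspace(0.1, 0.9, 9)
profile = np.array([integrated_variance(L) for L in locations])
# profile is lowest near the boundaries and peaks mid-surface: the inverted U.
```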

Computing a tool-centered spatial code with trilateration

Let us now consider the more specific case of performing trilateration for touch on a tool (Fig. 1A, top). Because the tool surface is not innervated, spatial information does not arise from a distribution of receptors but must instead be inferred from sensory information during tool-object contact. However, as we will see, this information forms a feature space that can computationally stand in for the real physical space of the tool (Fig. 2). Trilateration can be performed on this feature space, leading to a tool-centered code.

As with the body, the somatosensory system needs three variables, {x1, x2, x3}, to derive an estimate L̃ of the actual location of touch L in tool-centered coordinates. The representational nature of these variables depends on the type of sensory information that encodes where a tool was touched. We have previously argued that touch location is encoded in a rod’s resonant frequencies (Miller et al., 2018). The frequencies of these modes are determined by the physical properties of the rod, such as its length and material. However, the relative amplitude of each mode is determined by touch location (Fig. 2A), a pattern that is invariant across rods. The link between location and amplitude is captured by the shape of the modes.

Touch location can therefore be encoded in a unique combination of modal amplitudes, called vibratory motifs. These motifs compose a multidimensional feature space that forms a vibration-to-location isomorphism (Fig. 2B). Theoretically, this isomorphic mapping between the feature space of the vibrations and tool-centered space can computationally stand in for the physical space of the rod. We can therefore re-conceptualize the three initial spatial variables, {x1, x2, x3}, in relation to the isomorphism. The estimates x1 and x2 encode the locations of the proximal and distal boundaries within the feature space, respectively. The estimate x3 encodes the sensory input, which in our case is the vibration amplitude in each mode. Once the nervous system has learned the isomorphic mapping, the trilateral computation (Eqs. 2–5) can be used to derive an estimate of the tool-centered location of touch (Fig. 2B). To concretely demonstrate this possibility, we implemented this isomorphic mapping in a simple neural network.
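To give a concrete sense of where such motifs come from, the sketch below computes them under the textbook Euler–Bernoulli clamped-free (cantilever) beam idealization, the geometry depicted in Figure 2A: an impact at location x0 excites mode n with a weight proportional to that mode’s shape evaluated at x0. This is a simplified stand-in for the measured rod dynamics in Miller et al. (2018); grip compliance and damping are ignored, and all variable names are ours.

```python
import numpy as np

# First five roots of cos(bL)cosh(bL) = -1 (clamped-free beam eigenvalues)
BETA_L = np.array([1.8751, 4.6941, 7.8548, 10.9955, 14.1372])

def mode_shape(n, x):
    """Cantilever mode shape n at normalized position x (0 = clamped handle, 1 = free tip)."""
    b = BETA_L[n]
    s = (np.cosh(b) + np.cos(b)) / (np.sinh(b) + np.sin(b))
    return (np.cosh(b * x) - np.cos(b * x)) - s * (np.sinh(b * x) - np.sin(b * x))

def vibratory_motif(x0):
    """Weights of the first five modes for an impact at x0: each hit location
    yields a unique five-dimensional motif (the feature space of Fig. 2B)."""
    return np.array([mode_shape(n, x0) for n in range(5)])

motif_mid, motif_near_tip = vibratory_motif(0.5), vibratory_motif(0.9)
```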

Neural network implementation for trilateration on a tool

Somatosensory regions are characterized by spatial maps of the surface of individual body parts (Penfield and Boldrey, 1937). Based on this notion, the above formulation of trilateration for tactile localization on the body surface was implemented in a biologically inspired two-layer feedforward neural network (Miller et al., 2022). The first layer consisted of units that were broadly tuned to touch location in skin-based coordinates, as is thought to be encoded by primary somatosensory cortex (S1). The second layer consisted of units whose tuning was characterized by distance-dependent gradients (in peak firing rate and/or tuning width) that were anchored to one of the joints. They therefore embodied the distance computation as specified in Equations 2, 3. A Bayesian decoder demonstrated that the behavior of this network matched what would be expected from optimal trilateration (Equations 2–5), displaying distance-dependent noise and an inverted U-shaped variability following integration.

While this network relies on the observation that individual primary somatosensory neurons are typically tuned to individual regions of the skin (Delhaye et al., 2018), can it also be re-used for performing trilateration in vibration space? The vibratory motifs are unlikely to be spatially organized across the cortical surface. Instead, the nervous system must internalize the isomorphic mapping between the motifs and the physical space of the tool (Fig. 2). Disrupting the expected vibrations disrupts localization (Miller et al., 2018), suggesting that the user has internal models of rod dynamics (Imamizu et al., 2000). We assume that this internal model is implemented in units that are tuned to the statistics of the vibratory motifs.

We implemented the trilateral computation (Eqs. 2–5) in a three-layer neural network with four processing stages (Fig. 3). First, the amplitudes of each mode are estimated by a population of units with subpopulations tuned to each resonant mode (layer 1). Second, activation in each subpopulation is integrated by units tuned to the multidimensional statistics of the motifs (layer 2). This layer effectively forms the internal model of the feature space that is isomorphic to the rod’s physical space. Next, this activation pattern is transformed into tool-centered coordinates (Eqs. 2, 3) via two decoding subpopulations whose units are tuned to distance from the boundaries of the feature space (Eq. 3; layer 3). The population activity of each decoding subpopulation reflects the likelihoods in Equation 4 (Jazayeri and Movshon, 2006). Lastly, the final tool-centered location estimate is derived by a Bayesian decoder (Ma et al., 2006) that integrates the activity of both subpopulations (Eq. 5).

Figure 3.

Neural network implementation of trilateration. A, Neural network implementation of trilateration: (lower panel) the Mode layer is composed of subpopulations (two shown here) sensitive to the weights of individual modes (Fig. 2A), which are location dependent; (middle panel) the Feature layer takes input from the Mode layer and encodes the feature space (Fig. 2B), which forms the isomorphism with the physical space of the tool; (upper panel) the Distance layer is composed of two subpopulations of neurons with distance-dependent gradients in tuning properties (shown: firing rate and tuning width). The distance of a tuning curve from its “anchor” is coded by luminance, with darker colors corresponding to neurons that are closer to the spatial boundary. B, Activations for each layer of the network, averaged over 5000 simulations, when touch was at 0.75 (space between 0 and 1). Each dot corresponds to a unit of the neural network: (lower panel) Mode layer, with three of five subpopulations shown; (middle panel) Feature layer; (upper panel) Distance layer, showing localization for each decoding subpopulation.

The feature space of vibrations is multidimensional, composed of a theoretically infinite number of modes. However, only the first five modes (Fig. 2A) are typically within the bandwidth of mechanoreceptors (i.e., ∼10–1000 Hz; Johansson and Flanagan, 2009). The first layer of our network was therefore composed of units tuned to the amplitudes of these modes (Fig. 3A, bottom). This layer was composed of five subpopulations that each encode an estimate of the amplitude of a specific mode. These units were broadly tuned with Gaussian (bell-shaped) tuning curves fM of the following form:

$$f_M(\theta) = \kappa \exp\!\left[-\frac{(\theta - \mu)^2}{2\sigma^2}\right], \tag{6}$$

where κ is the peak firing rate (i.e., gain), μ is the tuning center related to the amplitude of the specific mode, θ is the mode amplitude of the stimulus, and σ² is the variance of the tuning curve. We modeled the response properties of these units for a given contact location on the rod with likelihood functions p(riM|θ) denoting the probability that mode amplitude θ caused riM spikes in encoding unit i. The likelihood function p(riM|θ) was modeled as a Poisson probability distribution with a Fano factor of one according to the following equation:

$$p(r_i^M \mid \theta) = \frac{e^{-f_i^M(\theta)}\, f_i^M(\theta)^{\,r_i^M}}{r_i^M!}, \tag{7}$$

where fiM is the tuning curve of unit i. The population response of the encoding units is denoted by a vector rM ≡ {r1M, …, rNM}, where riM is the spike count of unit i.
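A minimal sketch of this encoding stage (Eqs. 6, 7) is given below; the tuning centers and parameter values are illustrative placeholders rather than the values in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 50)   # tuning centers tiling a normalized amplitude range

def tuning(theta, kappa=10.0, sigma=0.1):
    """Gaussian tuning curves f_M over mode amplitude (Eq. 6)."""
    return kappa * np.exp(-(theta - centers) ** 2 / (2.0 * sigma ** 2))

def population_response(theta):
    """Spike counts r_i^M drawn with independent Poisson noise, Fano factor 1 (Eq. 7)."""
    return rng.poisson(tuning(theta))

r_M = population_response(0.4)   # response of one mode subpopulation to amplitude 0.4
```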

The amplitude θ of each mode is tied directly to the stimulus location L (Miller et al., 2018). The function of the next layer is to integrate the estimated amplitudes of each mode, encoded in rM, into a representation of the feature space that can be directly linked to L. It does so via units with bell-shaped tuning curves fS over the feature space (Fig. 3A, middle). The population activity rS of this layer is a combination of (1) the synaptic input WS · rM, where “·” is the dot product and WS is the matrix of all synaptic weights; and (2) the uninherited Poisson noise in the unit’s spiking behavior (Eq. 7). Each unit i in the second layer was fully connected to each unit in the first layer via a vector of synaptic weights wiS, which was set to be proportional to rM for each touch location L. For simplicity, the input into the second layer, fS(j), corresponded to the winner-take-all of the synaptic input: j = argmax(WS · rM).
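In code, this winner-take-all simplification is a single operation; the following sketch is purely illustrative, with hypothetical variable names.

```python
import numpy as np

def feature_layer_index(W_S, r_M):
    """Winner-take-all over the synaptic drive: j = argmax(W_S . r_M)."""
    return int(np.argmax(W_S @ r_M))
```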

The function of the third layer was to estimate the location of L in tool-centered coordinates given the population response rS in the feature space layer. We implemented this computation in two independent decoding subpopulations, each of which was “anchored” to one of the boundaries of the feature space (Fig. 3A, top). The population activity rD of each subpopulation corresponded to riD = wiD · rS + εi, where wiD is the vector of synaptic weights connecting unit i to the second layer and εi is the uninherited Poisson noise in the unit’s spiking behavior (Eq. 7). Each unit in the decoding layer was fully connected to each unit in the feature layer via wD. We used the MATLAB function fmincon to find the positive-valued weight vector that produced each decoding unit’s prespecified tuning curve (see below).

As in the previous neural network for body-centered tactile localization (Miller et al., 2022), the distance computation (Eqs. 2, 3) was embodied by distance-dependent gradients in the tuning of units fD in each decoding subpopulation. The gain κ of these units formed a distance-dependent gradient (close-to-far: high-to-low gain) across the length of the feature space:

$$\kappa(d) = \frac{\kappa_0}{(1 + \beta d)^2}, \tag{8}$$

where κ0 corresponds to the gain of the tuning curve centered on the landmark’s location (i.e., distance zero), d is the distance between the center of the tuning curve and the landmark (d ≥ 0), and β is a scaling factor. The width σ of each tuning curve can be uniform in either linear or log space. In the latter case, tuning width also forms a distance-dependent gradient (close-to-far: narrow-to-wide tuning) in linear space (Nieder and Miller, 2003), consistent with the Weber–Fechner law:

$$\sigma(d) = \left(\gamma \log(d + 1) + 1\right)\sigma_0, \tag{9}$$

where σ0 corresponds to the width of the tuning curve centered on the landmark’s location, d is the distance between the center of the tuning curve and the landmark (d ≥ 0), and γ is a scaling factor. It is important to note that these units fD are tuned to the feature space, not the vibrations themselves (as in the encoding layer). Given the isomorphism, we can therefore link their response properties directly to the location of touch L.
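Both gradients can be sketched directly from Equations 8 and 9; the parameter values below are illustrative, not those in Table 1.

```python
import numpy as np

def gain(d, kappa0=20.0, beta=2.0):
    """Peak firing rate at distance d from the anchoring landmark (Eq. 8)."""
    return kappa0 / (1.0 + beta * d) ** 2

def width(d, sigma0=0.05, gamma=1.5):
    """Tuning width at distance d from the anchoring landmark (Eq. 9)."""
    return (gamma * np.log(d + 1.0) + 1.0) * sigma0

distances = np.linspace(0.0, 1.0, 11)
gains, widths = gain(distances), width(distances)  # high-to-low, narrow-to-wide
```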

When neuronal noise is Poisson-like (as in Eq. 7), the gain of a neural population response reflects the precision (i.e., inverse variance) of its estimate (Ma et al., 2006). Therefore, given the aforementioned distance-dependent gradient in gain, noise in each subpopulation’s location estimate (that is, its uncertainty) will increase as a function of distance from a landmark (i.e., the handle or tip). Consistent with several studies (Jazayeri and Movshon, 2006; Ma et al., 2006), we assume that the population responses encode log probabilities. We can therefore decode the maximum-likelihood estimate of each subpopulation as follows:

$$p(\tilde{L}_1 \mid L, \mathbf{r}^{D1}) = \exp\!\left(\mathbf{h}^{D1}(L) \cdot \mathbf{r}^{D1}\right)$$
$$p(\tilde{L}_2 \mid L, \mathbf{r}^{D2}) = \exp\!\left(\mathbf{h}^{D2}(L) \cdot \mathbf{r}^{D2}\right), \tag{10}$$

where hD is a kernel and rD is the subpopulation response. When neural responses are characterized by independent Poisson noise (Eq. 7), hD is equivalent to the log of each subpopulation’s tuning curve fD at value L (Jazayeri and Movshon, 2006; Ma et al., 2006). Assuming that the population response reflects log probabilities, optimally integrating both estimates (Eq. 5) amounts to simply summing the activity of each subpopulation:

$$p(\tilde{L}_{INT} \mid L, \mathbf{r}^{D1}, \mathbf{r}^{D2}) = \exp\!\left(\mathbf{h}^{D1}(L) \cdot \mathbf{r}^{D1} + \mathbf{h}^{D2}(L) \cdot \mathbf{r}^{D2}\right), \tag{11}$$

where the optimal estimate L̃INT on a given trial n can be written as the location for which the log-likelihood of the summed population responses is maximal:

$$\tilde{L}_{INT}(n) = \operatorname*{argmax}_{L}\left(\mathbf{h}^{D1}(L) \cdot \mathbf{r}^{D1} + \mathbf{h}^{D2}(L) \cdot \mathbf{r}^{D2}\right). \tag{12}$$
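The decoding stage can be sketched compactly. The example below assumes independent Poisson noise and, following Jazayeri and Movshon (2006), takes the kernel h to be the log of the tuning curves; the population sizes, gains, and widths are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
L_grid = np.linspace(0.0, 1.0, 101)      # candidate locations
centers = np.linspace(0.0, 1.0, 30)      # tuning centers of one subpopulation

def tuning_matrix(anchor, kappa0=20.0, beta=2.0, sigma=0.08):
    """Tuning curves f_D(L) for one subpopulation; gain falls with the
    distance of each unit's center from its anchoring landmark (Eq. 8)."""
    gains = kappa0 / (1.0 + beta * np.abs(centers - anchor)) ** 2
    return gains[:, None] * np.exp(-(L_grid[None, :] - centers[:, None]) ** 2
                                   / (2.0 * sigma ** 2))

def log_likelihood(tuning, r):
    """h(L) . r with h = log f, valid for independent Poisson noise (Eq. 10)."""
    return np.log(tuning + 1e-12).T @ r

tun1, tun2 = tuning_matrix(0.0), tuning_matrix(1.0)
idx = np.argmin(np.abs(L_grid - 0.75))           # simulate touch at 0.75
r1, r2 = rng.poisson(tun1[:, idx]), rng.poisson(tun2[:, idx])
L_hat = L_grid[np.argmax(log_likelihood(tun1, r1) + log_likelihood(tun2, r2))]  # Eq. 12
```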

The above neural network, with a different encoding layer, implements trilateration for localizing touch in body-centered coordinates. Our present neural network (Equations 6–12) generalizes the Bayesian formulation of trilateration (Equations 2–5) to localizing touch on a tool, using a vibratory feature space as a proxy for tool-centered space. The flow of activity in this network can be visualized in Figure 3B, where the touch occurs at 75% of the surface of the tool.

To systematically investigate the behavior of this network, we simulated 5000 instances of touch at a wide range of locations (10% to 90% of the space) on the tool surface using the above network. The input into the neural network was the set of mode amplitudes θ for the corresponding location L. For simplicity, we did not model the actual process of mode decomposition from the spiking behavior of mechanoreceptors (Miller et al., 2018), but we did assume that the process is affected by sensory noise (Faisal et al., 2008). Therefore, for each simulation, the input (θ[L]) was corrupted by Gaussian noise with a standard deviation of 0.5 (units: % of space). The values of the above parameters for all layers can be seen in Table 1. All units within a layer shared the same parameter values. We used a maximum log-likelihood decoder to localize touch from the overall response of each subpopulation, either separately or integrated.

Table 1

Neural network parameter values

Behavioral experiment

Participants

Forty right-handed participants (24 females; 23.7 ± 2.5 years of age) completed our behavioral experiments. Two participants were removed because of an inability to follow task instructions, leaving 38 to be analyzed. All participants had normal or corrected-to-normal vision and no history of neurologic impairment. Every participant gave informed consent before the experiment. The study was approved by the ethics committee (CPP SUD EST IV, Lyon, France).

Experimental procedure

During the task, participants were seated comfortably in a cushioned chair with their torso aligned with the edge of a table and their right elbow placed in a padded arm rest. The entire arm was hidden from view with a long occluding board. A 60-cm-long rod (handle length: 12 cm; cross-sectional radius: 0.75 cm) was placed in their right hand. This rod was either wooden (25 participants) or PVC (13 participants). The arm was placed at a height necessary for a 1-cm separation between the object (see below) and the rod at a posture parallel with the table. On the surface of the table, an LCD screen (70 × 30 cm) lay backside down in the length-wise orientation; the edge of the LCD screen was 5 cm from the table’s edge. The center of the screen was aligned with the participant’s midline.

The task of participants was to localize touches resulting from active contact between the rod and an object (foam-padded wooden block). In an experimental session, participants completed two tasks with distinct reporting methods (order counterbalanced across participants). In the image-based task, participants used a cursor to indicate the corresponding location of touch on a downsized drawing of a rod (20 cm in length; handle to tip); the purpose of using a downsized drawing was to dissociate it from the external space occupied by the real rod. The drawing began 15 cm from the edge of the table, was raised 5 cm above the table surface, and was oriented in parallel with the real rod. The red cursor (circle, 0.2-cm radius) was constrained to move in the center of the screen occupied by the drawing. In the space-based task, participants used a cursor to indicate the corresponding location of touch within an empty LCD screen (white background). The cursor was constrained to move along the vertical bisection of the screen and could be moved across the entire length of the screen. It is critical to note that in this task, participants were forced to rely on somatosensory information about tool length and position as no other sensory cues were available to do so.

The trial structure for each task was as follows. In the “precontact phase,” participants sat facing the computer screen with their left hand on a trackball. A red cursor was placed at a random location within the vertical bisection of the screen. A “go” cue (brief tap on the right shoulder) indicated that they should actively strike the object with the rod. In the “localization phase,” participants made their task-relevant judgment with the cursor, controlled by the trackball. Participants never received feedback about their performance. To minimize auditory cues during the task, pink noise was played continuously over noise-cancelling headphones.

The object was placed at one of six locations, ranging from 10 cm from the handle to the tip (10–60 cm from the hand; steps of 10 cm). The number of object locations was unknown to participants. In each task, there were ten trials per touch location, making 60 trials per task and 120 trials in total. The specific location for each trial was chosen pseudo-randomly. The entire experimental session took ∼45 min.

The experiment started with a 5-min sensorimotor familiarization session. Participants were told to explore, at their own pace, how the tool felt to contact the object at different locations. They were instructed to pay attention to how the vibrations varied with impact location. Visual and auditory feedback of the tool and tool-object contact was prevented with a blindfold and pink noise, respectively. Participants were, however, allowed to hold the object in place with their left hand while contacting it with the tool but were not allowed to haptically explore the rod.

At the end of the space-based task, participants used the cursor to report where they felt the tip of the rod (aligned in parallel to the screen). The judged location of the tip (mean: 56.5 cm; SEM: 1.62 cm) was very similar to the rod’s actual length (i.e., 60 cm). It is critical to reiterate here that participants had never seen the rod prior to this point of the experiment, and likely relied on somatosensory feedback about its dimensions.

Data analysis

Regression analysis

Before analysis, all judgments in the image-based task were converted from pixels of drawing space to percentage of tool space. All judgments in the space-based task were normalized such that their estimated tip location corresponded to 100% of tool space. We then used least-squares linear regression to analyze the localization accuracy. The mean localization judgment for each touch location was modelled as a function of actual object location. Accuracy was assessed by comparing the group-level confidence intervals around the slope and intercept.
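A minimal sketch of this analysis, assuming hypothetical input arrays (scipy.stats.linregress for the per-participant fits and a t-based interval for the group-level confidence bounds):

```python
import numpy as np
from scipy.stats import linregress, t

def participant_fit(actual, judged_means):
    """Least-squares regression of mean judged location on actual location
    (both expressed in % of tool space)."""
    fit = linregress(actual, judged_means)
    return fit.slope, fit.intercept

def group_ci(values, alpha=0.05):
    """Confidence interval of the group mean (t-distributed)."""
    values = np.asarray(values, dtype=float)
    m = values.mean()
    se = values.std(ddof=1) / np.sqrt(len(values))
    h = se * t.ppf(1.0 - alpha / 2.0, len(values) - 1)
    return m - h, m + h

# hypothetical usage: collect per-participant slopes, then group_ci(slopes)
```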

Trilateration model

Our model of trilateration in the somatosensory system assumes that the perceived location of touch is a consequence of the optimal integration of two independent location estimates, L̃1 and L̃2. This is exemplified in our formulation of trilateration (Equations 1–5). Trilateration predicts that noise in each estimate varies linearly as a function of the distance of touch from two landmarks (Eq. 2; Fig. 1B), corresponding to the handle and tip. For any location of touch L along a tactile surface, the variance in each landmark-specific location estimate L̃ can therefore be written as follows:

$$\sigma_1^2 = \left(\hat{\varepsilon}_1 + d_1\hat{\sigma}\right)^2$$
$$\sigma_2^2 = \left(\hat{\varepsilon}_2 + d_2\hat{\sigma}\right)^2, \tag{13}$$

in which ε̂ is a landmark-specific intercept term that likely corresponds to uncertainty in the location of each landmark, d is the distance of touch location L from the landmark (Eqs. 2, 3), and σ̂ is the magnitude of noise per unit of distance. We assume that the noise term σ̂ corresponds to a general property of the underlying neural network and therefore model it as the same value for each landmark. The distance-dependent noise for the integrated estimate is therefore:

$$\sigma_{INT}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}. \tag{14}$$

The three parameters in the model (σ̂ , ε̂1 , and ε̂2 ) are properties of the underlying neural processes that implement trilateration and are therefore not directly observable. They must therefore be inferred using a reverse engineering approach, where they serve as free parameters that are fit to each participant’s variable errors. We simultaneously fit the three free parameters to the data using nonlinear least squares regression. Optimal parameter values were obtained through maximum likelihood estimation using the MATLAB routine fmincon. All modeling was done with the combined data from both localization tasks. R2 values for each participant in each experiment were taken as a measure of the goodness-of-fit between the observed and predicted pattern of location-dependent noise.
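The fit can be sketched as follows; SciPy’s curve_fit stands in here for the MATLAB fmincon routine used in the study, and the data arrays are illustrative placeholders, not participant data.

```python
import numpy as np
from scipy.optimize import curve_fit

def predicted_sd(L, sigma_hat, eps1, eps2):
    """Predicted variable error at normalized touch location L (Eqs. 13-14)."""
    sd1 = eps1 + sigma_hat * L            # distance from handle landmark at 0
    sd2 = eps2 + sigma_hat * (1.0 - L)    # distance from tip landmark at 1
    var_int = (sd1**2 * sd2**2) / (sd1**2 + sd2**2)
    return np.sqrt(var_int)

# illustrative placeholder data (six touch locations, 10-60 cm on a 60-cm rod)
locs = np.array([10, 20, 30, 40, 50, 60]) / 60.0
errs = np.array([0.050, 0.065, 0.075, 0.078, 0.070, 0.055])   # not real data

params, _ = curve_fit(predicted_sd, locs, errs, p0=[0.1, 0.02, 0.03],
                      bounds=(0.0, np.inf))
sigma_hat, eps1, eps2 = params
```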

Boundary truncation model

Boundary truncation provides one alternative model to trilateration. This model assumes that the estimate of location L̃ corresponds to a Gaussian likelihood whose variance is identical at all points on the rod. The inverted U-shaped variability arises because these likelihoods are truncated by a boundary, either by the range of possible responses or by a categorical boundary (e.g., between handle and tip). As in Equation 1, we can model each likelihood p(L̃|L) as a normal distribution N(μL, σL), where μL is the location of touch L and σL is the standard deviation. The posterior estimate p(L|L̃) then corresponds to a likelihood truncated at γ1 and γ2, where γ2 > γ1. Doing so will distort the mean and variance of the posterior estimate.

We fit this truncation model to the participant-level variable errors in each of our experiments. The standard deviation for each location, σT(L), was determined by truncating a normal distribution at γ1 and γ2 using the makedist and truncate functions in MATLAB. The model therefore had three free parameters: σT, γ1, and γ2. The value of σT was constrained between 1 and 40; γ1 between −30 and 30; and γ2 between 70 and 130 (units: % of rod surface). These ranges, particularly for γ1 and γ2, are quite liberal but were chosen to give the truncation model the best chance of fitting the variable errors. Fitting procedures for this model were the same as for the trilateration model.
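An equivalent sketch in Python, using scipy.stats.truncnorm as an analogue of MATLAB’s makedist/truncate pipeline; parameter values are illustrative.

```python
from scipy.stats import truncnorm

def truncated_sd(L, sigma_T, g1, g2):
    """SD of a normal centered at touch location L, truncated at [g1, g2]."""
    a, b = (g1 - L) / sigma_T, (g2 - L) / sigma_T   # standardized bounds
    return truncnorm.std(a, b, loc=L, scale=sigma_T)

# e.g., truncation at the response-range boundaries (units: % of rod surface)
sd_mid = truncated_sd(50.0, 20.0, 0.0, 100.0)   # larger than near the boundaries
sd_end = truncated_sd(95.0, 20.0, 0.0, 100.0)
```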

Model comparisons

We used the Bayesian Information Criterion (BIC) to compare the boundary and trilateration models. The difference in the BIC (ΔBIC) was used to determine a significant difference in fit. Consistent with convention, the chosen cutoff for moderate evidence was a ΔBIC of 2 and the cutoff for strong evidence was a ΔBIC of 6.
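For least-squares fits such as these, the BIC follows from the residual sum of squares under a Gaussian error assumption. The paper does not spell out the identity it used; the conventional form (n data points, k = 3 free parameters for both models, RSS the residual sum of squares) is:

```latex
% Standard BIC identity for a least-squares fit with Gaussian residuals;
% assumed here, as the text does not state the exact form used.
\mathrm{BIC} = n \ln\!\left(\frac{\mathrm{RSS}}{n}\right) + k \ln n,
\qquad
\Delta\mathrm{BIC} = \mathrm{BIC}_{\mathrm{truncation}} - \mathrm{BIC}_{\mathrm{trilateration}}
```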

Data and code availability

The neural network and anonymized behavioral data have been deposited in the Open Science Framework (DOI 10.17605/OSF.IO/ERBGW).

Results

Accurate localization of touch on a tool

In the current experiment (n = 38), we investigated whether tactile localization on a 60-cm hand-held rod is characterized by the inverted U-shaped pattern of variability (Fig. 1B) that is characteristic of trilateration when localizing touch on the body. In two tasks, we measured participants’ ability to localize an object that was actively contacted with a hand-held tool. In the image-based task, participants indicated the point of touch on a downsized drawing of the tool. In the space-based task, participants indicated the point of touch in external space. The latter task ensured that localization was not truncated by boundaries in the range of possible responses.

Consistent with prior results (Miller et al., 2018), we found that participants were generally quite accurate at localizing touch on the tool. Linear regressions (Fig. 4A) comparing perceived and actual hit location found slopes near unity in both the image-based task (mean slope: 0.93, 95% CI [0.88, 0.99]) and the space-based task (mean slope: 0.89, 95% CI [0.82, 0.95]). Analysis of the variable errors (2 × 6 repeated measures ANOVA) found a significant main effect of hit location (F(5,185) = 36.1, p < 0.001) but no main effect of task (F(1,37) = 0.39, p = 0.54) and no interaction (F(5,185) = 0.21, p = 0.96). Crucially, the pattern of variable errors (Fig. 4B) in both tasks displayed the hypothesized inverted U-shape, which was of similar magnitude to what has been observed for touch on the arm (Cholewiak and Collins, 2003; Miller et al., 2022).

Figure 4.

Localization and variable error for both tasks. A, Regressions fit to the localization judgments for both the image-based (blue) and space-based (orange) tasks. Error bars correspond to the group-level 95% confidence interval. B, Group-level variable errors for both tasks. Error bars correspond to the group-level 95% confidence interval.

Computational modeling of behavior

We next used computational modeling to confirm that the observed pattern of variable errors was indeed because of trilateration. We fit each participant’s variable errors with a probabilistic model of optimal trilateration (Fig. 1A,B) that was derived from its theoretical formulation (see Materials and Methods). We compared the trilateration model to an alternative hypothesis: that the inverted U-shaped pattern is because of truncation at the boundaries of localization (Petzschner et al., 2015), which cuts off the range of possible responses and thus produces lower variability at these boundaries. We fit a boundary truncation model to directly compare with our trilateration model. Given the lack of a main effect of task, and to increase statistical power, we collapsed across both tasks in this analysis.

Our computational model of trilateration provided a good fit to the variable errors observed during tactile localization on a tool. This was evident at the group level, where the magnitude of variable errors was similar to what has been found when localizing touch on the arm (Fig. 5A). We further observed a high coefficient of determination at the level of individual participants (median R2: 0.75; range: 0.29–0.95); indeed, 30 out of 38 participants had an R2 > 0.6. The fits of the trilateration model to the data of six randomly chosen participants can be seen in Figure 5B; the fits to each participant’s behavior can be seen in Extended Data Figures 5-1 and 5-2. In contrast, the R2 of the boundary truncation model was substantially lower than that of the trilateration model (median: 0.30; range: −0.19 to 0.71), never showing a better fit to the data in any participant (Fig. 6A).

Figure 5.

Trilateration model provides a good fit to localization behavior. A, Fit of the trilateration model to the group-level variable error (black dots). The purple line corresponds to the model fit. The light gray line and squares correspond to variable errors for localization on the arm observed in Miller et al. (2022); note that these data are size adjusted to account for differences in arm and rod size. B, Fit of the trilateration model to the variable errors of six randomly chosen participants. The fit of the trilateration model for each participant’s behavior can be seen in Extended Data Figures 5-1 and 5-2.

Extended Data Figure 5-1

Trilateration model fits for participants S1–S19

Fit of the trilateration model to the variable error (black dots) of participants S1–S19 (top-to-bottom; left-to-right). The purple line corresponds to the model fit. The goodness of fit is displayed as the R2.

Extended Data Figure 5-2

Trilateration model fits for participants S20–S38

Fit of the trilateration model to the variable error (black dots) of participants S20–S38 (top-to-bottom; left-to-right). The purple line corresponds to the model fit. The goodness of fit is displayed as the R2.

Figure 6.

Trilateration provides a better fit to the data than boundary truncation. A, Participant-level goodness of fit (R2) for the trilateration model (left, purple) and the boundary truncation model (right, green). For each participant, trilateration was a better fit to the data. B, Histogram of the ΔBIC values used to adjudicate between the two models, color-coded by the strength of the evidence in favor of trilateration. Purple corresponds to strong evidence in favor of trilateration; pink corresponds to moderate evidence; gray corresponds to weak/equivocal evidence. Note that in no case did the boundary truncation model provide a better fit to the localization data (i.e., ΔBIC < 0).

We next compared each model directly using the Bayesian Information Criterion (BIC). The BIC score for the trilateration model was lower in all 38 participants (mean ± SD; trilateration: 11.88 ± 5.88; truncation: 18.74 ± 4.70). Statistically, 33 participants showed at least moderate evidence (ΔBIC > 2) and 20 participants showed strong evidence (ΔBIC > 6) in favor of trilateration (Fig. 6B). In all, our results strongly suggest that, as with the body, localizing touch on a tool is consistent with computation via trilateration.

Neural network simulations

Finally, we simulated trilateration on a tool using a biologically inspired neural network with an architecture similar to the one we used previously for the body (Miller et al., 2022). The goal of these simulations was to concretely demonstrate that the feature space of vibratory motifs could stand in for the physical space of the rod. Our neural network thus took the mode amplitudes as input and trilaterated the resulting touch location in tool-centered coordinates (5000 simulations per location).

Both the mode and feature space layers of the neural network (Fig. 3, bottom and middle) produced unbiased sensory estimates with minimal uncertainty (Extended Data Fig. 7-1). Crucially, both subpopulations in the distance-computing layer (layer 3; Fig. 3, top) were able to localize touch with minimal constant error (Fig. 7A, upper panel), demonstrating that each could produce unbiased estimates of location from the sensory input. However, as predicted given the gradient in tuning parameters, the noise in their estimates rapidly increased as a function of distance from each landmark (Fig. 7B, upper panel), forming an X-shaped pattern across the surface of the tool.

Figure 7.

Neural network simulations. A, Localization accuracy for the estimates of each decoding subpopulation (upper panel; L1, blue; L2, red) and after integration by the Bayesian decoder (lower panel; LINT, purple). B, Decoding noise for each decoding subpopulation (upper panel) increased as a function of distance from each landmark. Note that distance estimates are made from the 10% and 90% locations for the first (blue) and second (red) decoding subpopulations, respectively. Integration via the Bayesian decoder (lower panel) led to an inverted U-shaped pattern across the surface. Note the differences in the y-axis range for both panels. The results of decoding for the mode and feature space layers of the network can be seen in Extended Data Figure 7-1.

Extended Data Figure 7-1

Intermediate output of the Mode and Feature layers

(A) Localization accuracy for the sensory estimates decoded from the Mode (top panel) and Feature (bottom panel) layers. Note that the ‘location’ decoded here is best conceptualized as within the vibratory feature space, as spatial localization is done via trilateration at higher layers of the network. (B) Uncertainty in the sensory estimates decoded from the Mode (top panel) and Feature (bottom panel) layers.

We next examined the output of the Bayesian decoder from Equations 11, 12 (Fig. 7, lower panel). As expected, we observed the computational signature of trilateration. Integrating both estimates resulted in an inverted U-shaped pattern of decoding noise across the surface of the tool (Fig. 7B, lower panel), with the lowest decoding noise near the landmarks and the highest decoding variance in the middle. Crucially, this is the exact pattern of variability we observed in our behavioral experiments (see above) and have previously observed for tactile localization on the body. These simulations establish the plausibility of trilateration as a computation that can turn a vibratory code into a spatial representation.

Discussion

If tools are embodied by the sensorimotor system, we would expect that the brain repurposes its body-based sensorimotor computations to perform similar tasks with tools. Using tactile localization as our case study, we uncovered multiple pieces of evidence that are consistent with this embodied view. First, as is the case for body parts, we observed that localizing touch on the surface of a tool is characterized by perceptual anchors at the handle and tip (de Vignemont et al., 2009). Second, computational modeling of behavioral responses suggests that they are the result of a probabilistic computation, trilateration; indeed, perceptual anchors are a computational signature of trilateration. Finally, using a simple three-layer population-based neural network, we demonstrated the possibility of trilateration in the vibratory feature space evoked by touches on tools. This neural network transformed a vibration-based input into a spatial code, reproducing perceptual anchors on the tool surface. These findings go well beyond prior research on embodiment (Martel et al., 2016) by identifying a computation that functionally unifies tools and limbs. Indeed, they suggest that trilateration is a spatial computation employed by the somatosensory system to localize touch on body parts and tools alike (Miller et al., 2022). They further have important implications for how trilateration would be repurposed at a neural level for tool-extended sensing.

If trilateration is a fundamental spatial computation used by the somatosensory system, it should be employed to solve the same problem (i.e., localization) regardless of whether the sensory surface is the body or a tool. Previous tactile localization studies have reported increased perceptual precision near the boundaries of the hands (Elithorn et al., 1953; Miller et al., 2022), arm (Cholewiak and Collins, 2003; de Vignemont et al., 2009; Miller et al., 2022), feet (Halnan and Wright, 1960), and abdomen (Cholewiak et al., 2004). These perceptual anchors are a signature of a trilateration computation (Miller et al., 2022). The results of the present study are consistent with the use of trilateration to localize touch on tools as well.

Our findings provide computational evidence that tools are embodied in the sensorimotor system (Martel et al., 2016), an idea that was proposed over a century ago (Head and Holmes, 1911). The close functional link between tools and limbs is not just a superficial resemblance but rather a reflection of the repurposing of neurocomputational resources dedicated to sensing and acting with a limb to that with a tool (Makin et al., 2017). This repurposing may be one reason that tool use leads to measurable changes in body perception and action processes (Cardinali et al., 2009; Canzoneri et al., 2013; Miller et al., 2014, 2019a).

Whereas the present study focused on simply-shaped tools (i.e., straight rods), tactile localization is also possible on more complexly-shaped tools (Yamamoto et al., 2005). We propose that trilateration also underlies tactile localization on these tools. We leveraged our trilateration model to simulate patterns of tactile localization on rods with different numbers of segments (Fig. 8). For multisegmented limbs (e.g., the arm), trilateration occurs locally within each segment (Cholewiak and Collins, 2003; Miller et al., 2022). That is, the signature inverted U-shaped pattern of variability is observed within each segment (e.g., upper and lower arms). Our simulations suggested that the same would be true for multisegmented tools (Fig. 8B). We predict that tactile localization within each segment of a rod would be characterized by the signature pattern of variability indicative of trilateration.

Figure 8.

Simulations of multisegmented rods. We simulated how trilateration operates within rods with different numbers of segments. Here, we show the predicted patterns of variability for (A) a single-segment rod (used in present study) and (B) two-segment (left) and three-segment (right) rods. The magnitude of variable error is color-coded as red-to-blue (low-to-high). The inverted U-shaped pattern of variability was observed in each segment.

Although trilateration was repurposed for localizing touch on a rod, we observed a noticeable difference in the overall shape of variable error between localizing touch on a rod versus limb (e.g., the arm; Fig. 5A). Whereas localization uncertainty (i.e., variable error) is typically symmetric about the center of a limb (Miller et al., 2022), uncertainty was asymmetric for the rod. Specifically, variable errors were lower near the handle than the tip, peaking away from the center of the rod and toward the tip. These patterns of variable error were also visible in the behavior of individual participants (Extended Data Figs. 5-1 and 5-2) and are a direct consequence of differences in the baseline uncertainty of each distance estimate (Eq. 13), as demonstrated by simulations in Miller et al. (2022).

There are at least two potential sources for these differences in baseline uncertainty. First, striking the rod near the tip may produce less consistent sensory information (i.e., vibrations), translating into greater sensory uncertainty of where the rod is touched. However, this explanation is unlikely since the hypothesized differences in sensory consistency were not observed in a previous study that characterized a rod’s vibratory motifs (Miller et al., 2018). Instead, the source of this difference may lie in the uncertainty of where each boundary is perceived in space via proprioceptive feedback (Eq. 3). The location of the handle is well-defined, as it corresponds to the position of the hand. The location of the tip is less well defined, as it must be extracted indirectly from proprioceptive feedback from the forelimb (Debats et al., 2012). This likely corresponds to higher estimation uncertainty of its position in space, contributing to greater baseline uncertainty of the tip-based distance estimate (Eq. 13). Future studies should attempt to adjudicate between these two hypotheses.

Another important difference between limbs and tools is the sensory input used to derive localization estimates. Whereas the skin is densely innervated with sensory receptors, the somatosensory system must “tune into” a tool’s mechanical response to extract meaningful information from it. It was previously proposed that where a rod is touched is encoded by the amplitudes of its resonant modes when it contacts an object (Miller et al., 2018, 2019b). These resonant modes form a feature space that is isomorphic with the physical space of the tool. At the peripheral level, these resonances are re-encoded in the spiking patterns of tactile mechanoreceptors (Johnson, 2001). Therefore, unlike for touch on the body, localizing touch on a tool requires the somatosensory system to perform a temporal-to-spatial transformation.
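The following sketch illustrates this transformation in deliberately idealized form. It assumes sinusoidal mode shapes as stand-ins for the rod's true bending modes, that an impulsive contact at location x excites each mode with an amplitude proportional to that mode's shape at x, and that the resulting amplitude vector can be decoded back to a location by simple template matching (an illustrative readout, not the model's). It is meant only to show how a vector of resonant amplitudes can serve as a spatial code.

```python
import numpy as np

L = 60.0                                    # rod length (cm, illustrative)
N_MODES = 5                                 # number of resonant modes modeled

def modal_amplitudes(x):
    """Signed amplitude of each mode excited by an impulsive contact at x.
    Sinusoids stand in for the rod's true bending-mode shapes."""
    n = np.arange(1, N_MODES + 1)
    return np.sin(n * np.pi * x / L)

# Feature-space templates along the rod, then nearest-neighbor decoding of a
# noisy amplitude vector back into a tool-centered location.
grid = np.linspace(0.5, L - 0.5, 120)
templates = np.stack([modal_amplitudes(g) for g in grid])

rng = np.random.default_rng(0)
true_x = 42.0
observed = modal_amplitudes(true_x) + rng.normal(0.0, 0.05, N_MODES)
decoded = grid[np.argmin(np.linalg.norm(templates - observed, axis=1))]
print(f"contact at {true_x:.1f} cm decoded as {decoded:.1f} cm")
```

Because the amplitude vector varies systematically with contact location, the feature space preserves the rod's spatial structure, which is the isomorphism the trilateration computation exploits.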

We used neural network simulations to instantiate the transformations necessary to implement trilateration on a tool. Our neural network assumes that the human brain contains neural populations that encode the full feature space of rod vibrations. Although very little is known about how these naturalistic vibrations are represented by the somatosensory system, our modeling results and prior research (Miller et al., 2018, 2019a) suggest that neural populations encode their properties. Previous work demonstrated that individual neurons in primary somatosensory cortex multiplex amplitude and frequency in their firing properties (Harvey et al., 2013), and recent evidence suggests that human S1 is tuned to individual vibration frequencies (Wang and Yau, 2021). Our neural network modeling further assumes that there are neurons tuned to the amplitude of specific frequencies, though direct empirical evidence for this tuning is currently lacking. Such a code would be consistent with the finding that S1 performs the initial stages of localization on a rod (Miller et al., 2019a). Furthermore, resonant amplitudes are crucial pieces of information in the natural statistics of vibrations, making it plausible that they are encoded at some stage of processing. Our results therefore open a new avenue for neurophysiological investigations into how naturalistic vibrations are encoded by the somatosensory system.
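As a minimal illustration of this assumed input-layer code (Gaussian amplitude tuning is a modeling assumption, not an established physiological finding), the sketch below encodes the amplitude of a single resonant mode in a population of tuned units with Poisson spiking and recovers it with a simple center-of-mass readout, a stand-in for the probabilistic population decoding used in the full model.

```python
import numpy as np

rng = np.random.default_rng(1)
pref_amps = np.linspace(0.0, 1.0, 16)   # preferred amplitudes of 16 units
width = 0.15                            # SD of the Gaussian tuning curves
peak_rate = 50.0                        # peak firing rate (spikes/s)

def population_response(amp, dt=0.1):
    """Poisson spike counts of the amplitude-tuned population in a dt-s window."""
    rates = peak_rate * np.exp(-0.5 * ((amp - pref_amps) / width) ** 2)
    return rng.poisson(rates * dt)

def decode(spikes):
    """Center-of-mass readout of the population activity."""
    return spikes @ pref_amps / max(spikes.sum(), 1)

true_amp = 0.62                          # amplitude of one resonant mode
estimates = [decode(population_response(true_amp)) for _ in range(200)]
print(f"true amplitude {true_amp}; decoded {np.mean(estimates):.2f} "
      f"± {np.std(estimates):.2f}")
```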

The present study demonstrates that it is biologically possible for the resonant feature space to stand in for the physical space of the tool, allowing trilateration to localize touch in tool-centered coordinates. Notably, the present neural network has a structure similar to one we previously showed could perform trilateration on the body surface (Miller et al., 2022). The biggest difference is the input layer, which must first encode the vibration information; once this is transformed into a representation of the feature space, the computation proceeds as it would for the body. Note that this does not require that the same neural populations localize touch on limbs and tools (Schone et al., 2021), only that the same computation is performed when localizing touch on both surfaces. Our network therefore provides a concrete demonstration of what it means to repurpose a body-based computation to localize touch on a tool. Repurposing this neural network architecture for trilateration explains tool embodiment and the emergence of a shared spatial code between tools and skin.

Footnotes

  • The authors declare no competing financial interests.

  • Received March 22, 2023.
  • Revision received September 13, 2023.
  • Accepted September 25, 2023.
  • Copyright © 2023 Miller et al.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Canzoneri E, Ubaldi S, Rastelli V, Finisguerra A, Bassolino M, Serino A (2013) Tool-use reshapes the boundaries of body and peripersonal space representations. Exp Brain Res 228:25–42. https://doi.org/10.1007/s00221-013-3532-2 pmid:23640106
  2. Cardinali L, Frassinetti F, Brozzoli C, Urquizar C, Roy AC, Farnè A (2009) Tool-use induces morphological updating of the body schema. Curr Biol 19:R478–R479. https://doi.org/10.1016/j.cub.2009.05.009 pmid:19549491
  3. Cardinali L, Brozzoli C, Urquizar C, Salemme R, Roy A, Farnè A (2011) When action is not enough: tool-use reveals tactile-dependent access to body schema. Neuropsychologia 49:3750–3757. https://doi.org/10.1016/j.neuropsychologia.2011.09.033 pmid:21971306
  4. Cardinali L, Jacobs S, Brozzoli C, Frassinetti F, Roy AC, Farnè A (2012) Grab an object with a tool and change your body: tool-use-dependent changes of body representation for action. Exp Brain Res 218:259–271. https://doi.org/10.1007/s00221-012-3028-5 pmid:22349501
  5. Cardinali L, Brozzoli C, Finos L, Roy A, Farnè A (2016) The rules of tool incorporation: tool morpho-functional and sensori-motor constraints. Cognition 149:1–5. https://doi.org/10.1016/j.cognition.2016.01.001 pmid:26774102
  6. Cholewiak RW, Collins AA (2003) Vibrotactile localization on the arm: effects of place, space, and age. Percept Psychophys 65:1058–1077. https://doi.org/10.3758/bf03194834 pmid:14674633
  7. Cholewiak RW, Brill JC, Schwab A (2004) Vibrotactile localization on the abdomen: effects of place and space. Percept Psychophys 66:970–987. https://doi.org/10.3758/bf03194989 pmid:15675645
  8. Clemens IA, De Vrijer M, Selen LP, Van Gisbergen JA, Medendorp WP (2011) Multisensory processing in spatial orientation: an inverse probabilistic approach. J Neurosci 31:5365–5377. https://doi.org/10.1523/JNEUROSCI.6472-10.2011 pmid:21471371
  9. Debats NB, Kingma I, Beek PJ, Smeets JB (2012) Moving the weber fraction: the perceptual precision for moment of inertia increases with exploration force. PLoS One 7:e42941. https://doi.org/10.1371/journal.pone.0042941 pmid:23028437
  10. Delhaye BP, Long KH, Bensmaia SJ (2018) Neural basis of touch and proprioception in primate cortex. Compr Physiol 8:1575–1602.
  11. de Vignemont F, Majid A, Jola C, Haggard P (2009) Segmenting the body into parts: evidence from biases in tactile perception. Q J Exp Psychol (Hove) 62:500–512. https://doi.org/10.1080/17470210802000802 pmid:18609376
  12. Elithorn A, Piercy MF, Crosskey MA (1953) Tactile localization. Quart J Exp Psychol 5:171–182. https://doi.org/10.1080/17470215308416640
  13. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. https://doi.org/10.1038/415429a pmid:11807554
  14. Fabio C, Salemme R, Koun E, Farnè A, Miller LE (2022) Alpha oscillations are involved in localizing touch on a tool. J Cogn Neurosci 34:675–686. https://doi.org/10.1162/jocn_a_01820 pmid:35061032
  15. Faisal AA, Selen LP, Wolpert DM (2008) Noise in the nervous system. Nat Rev Neurosci 9:292–303. https://doi.org/10.1038/nrn2258 pmid:18319728
  16. Garbarini F, Fossataro C, Berti A, Gindri P, Romano D, Pia L, della Gatta F, Maravita A, Neppi-Modona M (2015) When your arm becomes mine: pathological embodiment of alien limbs using tools modulates own body representation. Neuropsychologia 70:402–413. https://doi.org/10.1016/j.neuropsychologia.2014.11.008 pmid:25448852
  17. Halnan CR, Wright GH (1960) Tactile localization. Brain 83:677–700. https://doi.org/10.1093/brain/83.4.677 pmid:13710893
  18. Harvey MA, Saal HP, Dammann JF 3rd., Bensmaia SJ (2013) Multiplexing stimulus information through rate and temporal codes in primate somatosensory cortex. PLoS Biol 11:e1001558. https://doi.org/10.1371/journal.pbio.1001558 pmid:23667327
  19. Head H, Holmes G (1911) Sensory disturbances from cerebral lesions. Brain 34:102–254. https://doi.org/10.1093/brain/34.2-3.102
  20. Imamizu H, Miyauchi S, Tamada T, Sasaki Y, Takino R, Pütz B, Yoshioka T, Kawato M (2000) Human cerebellar activity reflecting an acquired internal model of a new tool. Nature 403:192–195. https://doi.org/10.1038/35003194 pmid:10646603
  21. Jazayeri M, Movshon JA (2006) Optimal representation of sensory information by neural populations. Nat Neurosci 9:690–696. https://doi.org/10.1038/nn1691 pmid:16617339
  22. Johnson KO (2001) The roles and functions of cutaneous mechanoreceptors. Curr Opin Neurobiol 11:455–461.
  23. Johansson RS, Flanagan JR (2009) Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci 10:345–359. https://doi.org/10.1038/nrn2621 pmid:19352402
  24. Kilteni K, Ehrsson HH (2017) Sensorimotor predictions and tool use: hand-held tools attenuate self-touch. Cognition 165:1–9. https://doi.org/10.1016/j.cognition.2017.04.005 pmid:28458089
  25. Körding KP, Wolpert DM (2004) Bayesian integration in sensorimotor learning. Nature 427:244–247. https://doi.org/10.1038/nature02169 pmid:14724638
  26. Longo MR, Azañón E, Haggard P (2010) More than skin deep: body representation beyond primary somatosensory cortex. Neuropsychologia 48:655–668. https://doi.org/10.1016/j.neuropsychologia.2009.08.022 pmid:19720070
  27. Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432–1438. https://doi.org/10.1038/nn1790 pmid:17057707
  28. Makin TR, de Vignemont F, Faisal AA (2017) Neurocognitive barriers to the embodiment of technology. Nat Biomed Eng 1:0014. https://doi.org/10.1038/s41551-016-0014
  29. Maravita A, Iriki A (2004) Tools for the body (schema). Trends Cogn Sci 8:79–86. https://doi.org/10.1016/j.tics.2003.12.008 pmid:15588812
  30. Martel M, Cardinali L, Roy AC, Farnè A (2016) Tool-use: an open window into body representation and its plasticity. Cogn Neuropsychol 33:82–101. https://doi.org/10.1080/02643294.2016.1167678 pmid:27315277
  31. Martel M, Cardinali L, Bertonati G, Jouffrais C, Finos L, Farnè A, Roy AC (2019) Somatosensory-guided tool use modifies arm representation for action. Sci Rep 9:5517. https://doi.org/10.1038/s41598-019-41928-1 pmid:30940857
  32. Medina J, Coslett HB (2010) From maps to form to space: touch and the body schema. Neuropsychologia 48:645–654. https://doi.org/10.1016/j.neuropsychologia.2009.08.017 pmid:19699214
  33. Miller LE, Longo MR, Saygin AP (2014) Tool morphology constrains the effects of tool use on body representations. J Exp Psychol Hum Percept Perform 40:2143–2153. https://doi.org/10.1037/a0037777 pmid:25151100
  34. Miller LE, Longo MR, Saygin AP (2017) Visual illusion of tool use recalibrates tactile perception. Cognition 162:32–40. https://doi.org/10.1016/j.cognition.2017.01.022 pmid:28196765
  35. Miller LE, Montroni L, Koun E, Salemme R, Hayward V, Farnè A (2018) Sensing with tools extends somatosensory processing beyond the body. Nature 561:239–242. https://doi.org/10.1038/s41586-018-0460-0 pmid:30209365
  36. Miller LE, Fabio C, Ravenda V, Bahmad S, Koun E, Salemme R, Luauté J, Bolognini N, Hayward V, Farnè A (2019a) Somatosensory cortex efficiently processes touch located beyond the body. Curr Biol 29:4276–4283.e5. https://doi.org/10.1016/j.cub.2019.10.043 pmid:31813607
  37. Miller LE, Longo MR, Saygin AP (2019b) Tool use modulates somatosensory cortical processing in humans. J Cogn Neurosci 31:1782–1795. https://doi.org/10.1162/jocn_a_01452 pmid:31368823
  38. Miller LE, Fabio C, Azaroual M, Muret D, van Beers R, Farnè A, Medendorp WP (2022) A neural surveyor to map touch on the body. Proc Natl Acad Sci U S A 119:e2102233118. https://doi.org/10.1073/pnas.2102233118
  39. Nieder A, Miller EK (2003) Coding of cognitive magnitude: compressed scaling of numerical information in the primate prefrontal cortex. Neuron 37:149–157. https://doi.org/10.1016/s0896-6273(02)01144-3 pmid:12526780
  40. Pazen M, Uhlmann L, van Kemenade BM, Steinsträter O, Straube B, Kircher T (2020) Predictive perception of self-generated movements: commonalities and differences in the neural processing of tool and hand actions. Neuroimage 206:116309. https://doi.org/10.1016/j.neuroimage.2019.116309 pmid:31669300
  41. Penfield W, Boldrey E (1937) Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain 60:389–443.
  42. Petzschner FH, Glasauer S, Stephan KE (2015) A Bayesian perspective on magnitude estimation. Trends Cogn Sci 19:285–293. https://doi.org/10.1016/j.tics.2015.03.002 pmid:25843543
  43. Pouget A, Beck JM, Ma WJ, Latham PE (2013) Probabilistic brains: knowns and unknowns. Nat Neurosci 16:1170–1178. https://doi.org/10.1038/nn.3495 pmid:23955561
  44. Romano D, Uberti E, Caggiano P, Cocchini G, Maravita A (2019) Different tool training induces specific effects on body metric representation. Exp Brain Res 237:493–501. https://doi.org/10.1007/s00221-018-5405-1 pmid:30460395
  45. Schone HR, Mor ROM, Baker CI, Makin TR (2021) Expert tool users show increased differentiation between visual representations of hands and tools. J Neurosci 41:2980–2989.
  46. Sposito A, Bolognini N, Vallar G, Maravita A (2012) Extension of perceived arm length following tool-use: clues to plasticity of body metrics. Neuropsychologia 50:2187–2194. https://doi.org/10.1016/j.neuropsychologia.2012.05.022 pmid:22683448
  47. Wang L, Yau JM (2021) Signatures of vibration frequency tuning in human neocortex. bioRxiv 462923. https://doi.org/10.1101/2021.10.03.462923.
  48. Yamamoto S, Kitazawa S (2001) Sensation at the tips of invisible tools. Nat Neurosci 4:979–980. https://doi.org/10.1038/nn721 pmid:11544483
  49. Yamamoto S, Moizumi S, Kitazawa S (2005) Referral of tactile sensation to the tips of L-shaped sticks. J Neurophysiol 93:2856–2863. https://doi.org/10.1152/jn.01015.2004 pmid:15634708

Synthesis

Reviewing Editor: Ifat Levy, Yale School of Medicine

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Benoit Delhaye.

This paper extends previous work, which proposed that trilateration is a fundamental computation used by the somatosensory system to localize touch on the limbs, to the problem of contact localization on a tool.

In a perceptual experiment involving a relatively large sample (n=38), the authors reproduce previous findings (Miller et al., 2018) that humans can localize contact points on a rod that they actively strike. Importantly, the results show that the variability of the reported location error is highest in the middle of the rod and lowest at the boundaries, a signature of trilateration, which provides support for a similar neural computation across body and tools. In a modeling effort, the authors extend a previously published neural network (Miller et al., 2022) to account for the particular neural signals related to localization on a tool. Indeed, contact with different body parts activates different tactile fibers, which in turn activate different subareas of the somatosensory cortex. Contact at different locations on the rod, however, will likely activate very similar if not identical tactile fibers, with only subtle differences in the timing of activation. To account for that major difference, the previous neural network is endowed with an additional input layer aimed at performing a modal decomposition of the input signal to extract location from the rod's vibration signals.

The paper is clear and well written, and the extension of this group’s computational approach is straightforward. There are, however, a few issues that should be addressed before the paper can be accepted.

- The abstract and introduction should be more specific about the central contribution of this paper: is it the difference (and potential similarities) between the neural computations for the body and for the tool, or the additional computation needed to extract location on the rod from the vibratory signal? These sections should be more explicit about this fundamental difference in the input signal.

- Similarly, the modeling results only show the final output of the network, and do not provide the intermediate output given by the decoding of the modal decomposition of the vibratory signal. This is the specific contribution of this study, as the subsequent layers are the same as in previous work (Miller et al., 2022). The model also assumes that the modal decomposition of the tactile signals has been done “perfectly.” It is not clear why this assumption is made or how this computation could be implemented.

- In figure 1, can the authors speculate on why the trilateration model fit does not fully match the actual subject data from Miller et al., 2022? What would be different in arm vs. tool localization to account for this?

- It would be helpful if all single-subject data were presented as an extended data figure for the plots in figure 1B. Among the few selected subjects shown, it is interesting that some tend to be centered at 30 and match the arm localization variable error (the bottom two right) while others do not. Related to the previous question, why would this be the case?

- In page 23, lines 449-453, please present medians rather than means of data.

- In page 23, lines 449-453, please show the plots of R2 for the trilateration model vs. R2 for the boundary truncation model either as a figure or as extended data. This will also provide a sense of the dispersion of the R2 in addition to the actual benefit of the trilateration model. It would be useful to color code the BIC on this figure.

- One important simplification in the model is to consider limbs and tools as ideal linear objects. How would trilateration behave in the face of complex-shaped objects? In particular, can trilateration predict perceptual illusions from objects of specific complex shapes? This would be a very nice confirmation of the implementation of trilateration in the brain.

Minor comments:

210: model -> mode

427: weird to write “we” when the two references are not from the same authors

458: “Touch on a tool is localized via trilateration.” -> is consistent with...

502: “They demonstrate that trilateration is the spatial computation employed by the somatosensory system to localize touch on body parts and tools alike” This is a very strong statement. This study demonstrates the possibility or plausibility of using such computation.

Code: the index computed via find(max()) is available directly as the second output argument of the max() function (in MATLAB, [~, idx] = max(x)), which is much faster.
