A Somatosensory Computation That Unifies Limbs and Tools

Abstract It is often claimed that tools are embodied by their user, but whether the brain actually repurposes its body-based computations to perform similar tasks with tools is not known. A fundamental computation for localizing touch on the body is trilateration. Here, the location of touch on a limb is computed by integrating estimates of the distance between sensory input and its boundaries (e.g., elbow and wrist of the forearm). As evidence of this computational mechanism, tactile localization on a limb is most precise near its boundaries and least precise in the middle. Here, we show that the brain repurposes trilateration to localize touch on a tool, despite large differences in initial sensory input compared with touch on the body. In a large sample of participants, we found that localizing touch on a tool produced the signature of trilateration, with highest precision close to the base and tip of the tool. A computational model of trilateration provided a good fit to the observed localization behavior. To further demonstrate the computational plausibility of repurposing trilateration, we implemented it in a three-layer neural network that was based on principles of probabilistic population coding. This network determined hit location in tool-centered coordinates by using a tool's unique pattern of vibrations when contacting an object. Simulations demonstrated the expected signature of trilateration, in line with the behavioral patterns. Our results have important implications for how trilateration may be implemented by somatosensory neural populations. We conclude that trilateration is likely a fundamental spatial computation that unifies limbs and tools.

It is often claimed that tools are embodied by the user, but computational evidence for this claim is scarce. We show that to localize touch on a tool, the brain repurposes a fundamental computation for localizing touch on the body, trilateration. A signature of trilateration is high localization precision near the boundaries of a limb and low precision in the middle. We find that localizing touch on a tool produces this signature of trilateration, which we characterize using a computational model. We further demonstrate the plausibility of embodiment by implementing trilateration within a three-layer neural network that transforms a tool's vibrations into a tool-centered spatial representation. We conclude that trilateration is a fundamental spatial computation that unifies limbs and tools.

Introduction
The proposal that the brain treats a tool as if it were an extended limb (tool embodiment) was first made over a century ago (Head and Holmes, 1911). From the point of view of modern neuroscience, embodiment would entail that the brain reuses its sensorimotor computations when performing the same task with a tool as with a limb. There is indirect evidence that this is the case (for review, see Maravita and Iriki, 2004; Martel et al., 2016), such as the ability of tool-users to accurately localize where a tool has been touched (Miller et al., 2018) just as they would on their own body. Several studies have highlighted important similarities between tool-based and body-based tactile spatial processing (Yamamoto and Kitazawa, 2001; Kilteni and Ehrsson, 2017; Miller et al., 2018), including at the neural level in the activity of fronto-parietal regions (Miller et al., 2019a; Pazen et al., 2020; Fabio et al., 2022). Tool use also modulates somatosensory perception and action processes (Cardinali et al., 2009, 2011, 2012, 2016; Sposito et al., 2012; Canzoneri et al., 2013; Miller et al., 2014, 2017, 2019b; Garbarini et al., 2015; Martel et al., 2019; Romano et al., 2019).
The above findings are suggestive of functional similarities between tools and limbs. However, direct evidence that body-based computational mechanisms are repurposed to sense and act with tools is lacking. For this to be possible, the nervous system would need to resolve the differences in the sensory input following touch on the skin versus on a tool. Unlike the skin, tools are not innervated with mechanoreceptors. Touch location is instead initially encoded in the tool's mechanical response; for example, in how it vibrates when striking an object (Miller et al., 2018). Repurposing a body-based neural computation to perform the same function for a tool (i.e., embodiment) requires overcoming this key difference in the sensory input signal. The present study uses tool-based tactile localization (Miller et al., 2018) as a case study to provide the first neurocomputational test of embodiment.
Tactile localization on the body is often characterized by greater precision near body-part boundaries (e.g., joints or borders), a phenomenon called perceptual anchoring (Cholewiak and Collins, 2003; de Vignemont et al., 2009). A recent study found converging evidence that perceptual anchors are the signature of trilateration (Miller et al., 2022), a computation used by surveyors to localize an object within a map. To do so, a surveyor estimates the object's distance from multiple landmarks of known positions. When applied to body maps (Fig. 1A, bottom), a "neural surveyor" localizes touch on a body part by estimating the distance between sensory input and body-part boundaries (e.g., the wrist and elbow for the forearm). To estimate the touch location in limb-centered coordinates, these two distance estimates can be integrated to produce a Bayes-optimal location percept (Ernst and Banks, 2002; Körding and Wolpert, 2004; Clemens et al., 2011). Consistent with Weber's Law and log-coded spatial representations (Petzschner et al., 2015), noise in each distance estimate increased linearly as a function of distance (Fig. 1B). Integrating them resulted in an inverted U-shaped noise profile across the surface, with the lowest noise near the boundaries and highest noise in the middle (i.e., perceptual anchoring).
In the present study, we investigated whether trilateration is repurposed to localize touch on a tool (Fig. 1A). If this is indeed the case, localizing touch on a tool would be characterized by an inverted U-shaped pattern of variable errors across its surface (Fig. 1B). We first provide a theoretical formulation of how trilateration could be repurposed to sense with a tool, arguing that the brain uses the tool's vibrational properties to stand in for a representation of the physical space of the tool (Miller et al., 2018). In this formulation, trilateration is repurposed by computing over a vibratory feature space (Fig. 2), using its boundaries as proxies for the boundaries of physical tool space. Distance estimates (Fig. 1A) are then computed within a neural representation of the feature space, just as they would be for a representation of body space. Next, we characterize the ability of participants to localize touch on a tool (Fig. 1C) and use computational modeling to verify the expected computational signature of trilateration (Fig. 1B). Finally, we use neural network modeling to implement the vibration-to-location transformation required for trilaterating touch location on a tool, providing one possible mechanism for how embodiment is implemented. In all, our findings solidify the plausibility of trilateration as the computation underlying tactile localization on both limbs and tools.

Theoretical formulation of trilateration
In the present section, we provide a theoretical formulation of trilateration and how it can be applied to localizing touch within a somatosensory-derived coordinate system, be it centered on a body part or the surface of a tool (Fig. 1A). The general computational goal of trilateration is to estimate the location of an object by calculating its distance from vantage points of known position, which we will refer to as landmarks. Applied to tactile localization, this amounts to estimating the location of touch by averaging over distance estimates taken from the boundaries of the sensory surface (Fig. 1A), which serve as the landmarks and are assumed to be known to the nervous system via either learning or sensory feedback (Longo et al., 2010). For a body part (e.g., forearm), the landmarks are often its joints (e.g., wrist and elbow) and lateral sides. For simple tools such as rods, the landmarks correspond to the handle and tip; previous research has shown that users can sense their positions through somatosensory feedback during wielding (Debats et al., 2012).
We will first consider the general case of localizing touch within an unspecified somatosensory coordinate system. For simplicity, we will consider only a single dimension of the coordinate system, with localization between its two boundaries. We propose that the somatosensory system needs only three spatial variables, $\{x_1, x_2, x_3\}$, to derive an estimate $\hat{L}$ of the actual location of touch $L$ in surface-centered coordinates. The variables $x_1$ and $x_2$ correspond to the proximal and distal boundaries, respectively. The variable $x_3$ corresponds to the sensory input. Because of noise (Faisal et al., 2008), the nervous system does not represent variables as point estimates but as probability densities over some range of values (Pouget et al., 2013). Assuming normally distributed noise, each variable $x_i$ can thus be thought of as a Gaussian likelihood:

$$x_i \sim \mathcal{N}(X_i, \sigma_i^2), \tag{1}$$

where the mean $X_i$ corresponds to its true spatial position and the variance $\sigma_i^2$ corresponds to the uncertainty in its internal estimate. Here, $X_1$ and $X_2$ are the true positions of the landmarks (i.e., boundaries) and $X_3$ is the position of the sensory input. It is important to note that these positions can be specified within any shared coordinate system. For example, touch on the body is thought to initially be represented in skin-based coordinates (Medina and Coslett, 2010), not coordinates centered on a limb.
The relationship between $X_3$ and $L$ therefore remains ambiguous without the proper computation to transform it into actual surface-centered coordinates (Longo et al., 2010).
Trilateration performs the necessary computation to transform $x_3$ into surface-centered coordinates (Miller et al., 2022). It does so by calculating its distance from the proximal and distal boundaries of the coordinate system ($x_1$ and $x_2$, respectively), producing two additional estimates:

$$d_1 \sim \mathcal{N}(X_3 - X_1, \sigma_{d_1}^2), \qquad d_2 \sim \mathcal{N}(X_2 - X_3, \sigma_{d_2}^2), \tag{2}$$

where each distance estimate $d_i$ corresponds to a Gaussian likelihood with a mean equal to the distance between $X_3$ and the respective boundary and a variance that scales with distance. That is, localization estimates are more precise when the touch is physically closer to a boundary than when it is farther away (Fig. 1B). This distance-dependent noise is consistent with coding distance in log space (Petzschner et al., 2015) and is a consequence of how distance computation is implemented by a neural decoder (see below).

Figure 2. Vibration modes and feature space. A, The shapes of the first five modes for contact on a cantilevered rod. The weight of each mode varies as a function of hit location; each hit location is characterized by a unique combination of mode weights. B, The vibration-location feature space (purple) from handle ($X_1$) to tip ($X_2$). This feature space is isomorphic with the actual physical space of the rod. $v$ corresponds to a resonant frequency, the black dot corresponds to the hit location (as in Fig. 1A) within the feature space, and the arrows are the gradients of distance estimation during trilateration.

Figure 1. Model of trilateration and tool-sensing paradigm. A, The trilateral computation applied to the space of the arm (bottom) and a hand-held rod (top). Distance estimates from the sensory input (black) to each boundary ($d_1$ and $d_2$) are integrated (purple) to form a location estimate. B, In our model, the noise in each distance estimate ($d_1$, $d_2$) increases linearly with distance. The integrated estimate forms an inverted U-shaped pattern. C, Two tool-sensing tasks used to characterize tactile localization on a hand-held rod. The purple arrow corresponds to the location of touch in tool-centered space. The red square corresponds to the judgment of location within the computer screen.
Given the above distance estimates (Eq. 2), we can derive two estimates of touch location $\hat{L}_i$ that are aligned within a common coordinate system:

$$\hat{L}_1 = x_1 + d_1, \qquad \hat{L}_2 = x_2 - d_2. \tag{3}$$

These two location estimates can be used to derive a final estimate. However, given the presence of distance-dependent noise, the precision of each estimate will vary across the sensory surface (Fig. 1B). Assuming a flat prior for touch location, the statistically optimal solution (i.e., maximum likelihood) is to integrate both estimates:

$$p(L \mid \hat{L}_1, \hat{L}_2) \propto p(L \mid \hat{L}_1)\,p(L \mid \hat{L}_2). \tag{4}$$

Here, the mean ($\mu_{INT}$) and variance ($\sigma_{INT}^2$) of the integrated surface-centered posterior distribution depend on the means ($\mu_1$ and $\mu_2$) and variances ($\sigma_1^2$ and $\sigma_2^2$) of the individual estimates:

$$\mu_{INT} = \frac{\sigma_2^2\,\mu_1 + \sigma_1^2\,\mu_2}{\sigma_1^2 + \sigma_2^2}, \qquad \sigma_{INT}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}. \tag{5}$$

The integrated posterior $p(L \mid \hat{L}_1, \hat{L}_2)$ thus reflects the maximum-likelihood estimate of touch location, $\hat{L} = \mu_{INT}$. Given that the noise in each individual estimate scales linearly with distance, integration produces an inverted U-shaped pattern of variance across the surface (Fig. 1B). This pattern of variability serves as a computational signature of trilateration, which has been observed for tactile localization on the arm and fingers (Miller et al., 2022). The present study investigates whether the same holds for localizing touch on a hand-held rod. Our computational analyses implement this probabilistic model of trilateration (see below).
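The behavior of Equations 2-5 can be illustrated with a minimal Monte-Carlo sketch in Python. The linear noise constant, surface length, and trial count below are toy values chosen for illustration, not parameters from our analyses:

```python
import numpy as np

rng = np.random.default_rng(0)

def trilaterate_sd(L, length=1.0, noise_per_dist=0.1, n_trials=10_000):
    """Monte-Carlo sketch of Eqs. 2-5 for touch at L on [0, length].

    Noise in each distance estimate grows linearly with distance (the
    noise constant is an assumed toy value); the two location estimates
    are combined by inverse-variance weighting (Eq. 5).
    """
    d1, d2 = L, length - L                    # true distances to the landmarks
    s1, s2 = noise_per_dist * d1, noise_per_dist * d2
    L1 = d1 + s1 * rng.standard_normal(n_trials)             # handle-anchored (Eq. 3)
    L2 = length - (d2 + s2 * rng.standard_normal(n_trials))  # tip-anchored (Eq. 3)
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    L_int = (w1 * L1 + w2 * L2) / (w1 + w2)                  # Eq. 5
    return L_int.std()

# Variability is lowest near the landmarks and peaks in the middle (Fig. 1B)
profile = [trilaterate_sd(L) for L in np.linspace(0.1, 0.9, 9)]
```

Running the sketch reproduces the qualitative signature: the standard deviation of the integrated estimate is smallest near either landmark and largest midway between them.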

Computing a tool-centered spatial code with trilateration
Let us now consider the more specific case of performing trilateration for touch on a tool (Fig. 1A, top). Because the tool surface is not innervated, spatial information does not arise from a distribution of receptors but must instead be inferred from sensory information during tool-object contact. However, as we will see, this information forms a feature space that can computationally stand in for the real physical space of the tool (Fig. 2). Trilateration can be performed on this feature space, leading to a tool-centered code.
As with the body, the somatosensory system needs three variables, $\{x_1, x_2, x_3\}$, to derive an estimate $\hat{L}$ of the actual location of touch $L$ in tool-centered coordinates. The representational nature of these variables depends on the type of sensory information that encodes where the tool was touched. We have previously argued that touch location is encoded in the rod's resonant frequencies, or modes (Miller et al., 2018). The frequencies of these modes are determined by the physical properties of the rod, such as its length and material. However, the relative amplitude of each mode is determined by touch location (Fig. 2A), a pattern that is invariant across rods. The link between location and amplitude is captured by the shape of the modes.
Touch location can therefore be encoded in a unique combination of modal amplitudes, called a vibratory motif. These motifs form a multidimensional feature space that constitutes a vibration-to-location isomorphism (Fig. 2B). Theoretically, this isomorphic mapping between the feature space of the vibrations and tool-centered space can computationally stand in for the physical space of the rod. We can therefore re-conceptualize the three initial spatial variables, $\{x_1, x_2, x_3\}$, in relation to the isomorphism. The estimates $x_1$ and $x_2$ encode the locations of the proximal and distal boundaries within the feature space, respectively. The estimate $x_3$ encodes the sensory input, which in our case is the vibration amplitude of each mode. Once the nervous system has learned the isomorphic mapping, the trilateral computation (Eqs. 2-5) can be used to derive an estimate of the tool-centered location of touch (Fig. 2B). To concretely demonstrate this possibility, we implemented this isomorphic mapping in a simple neural network.
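The idea of a vibration-to-location isomorphism can be made concrete with a toy Python sketch. The Gaussian sensitivity profiles below are an illustrative stand-in for real cantilever mode shapes (they are not beam mechanics); the point is only that each hit location produces a unique motif, so a learned dictionary of motifs can be inverted back to location:

```python
import numpy as np

def motif(x, n_modes=5):
    # Toy stand-in for mode weights: Gaussian sensitivity profiles along
    # the rod (assumed shapes, not real beam mechanics). Each location x
    # in [0, 1] yields a unique combination of relative amplitudes.
    centers = np.linspace(0.0, 1.0, n_modes)
    w = np.exp(-0.5 * ((x - centers) / 0.25) ** 2)
    return w / w.sum()

# A dictionary of motifs over the rod plays the role of the learned
# isomorphism: any motif can be mapped back to the location that produced it
grid = np.linspace(0.0, 1.0, 201)
dictionary = np.array([motif(x) for x in grid])

def decode_location(m):
    return grid[np.argmin(np.linalg.norm(dictionary - m, axis=1))]
```

In this toy model, `decode_location(motif(x))` recovers `x` to within the dictionary's resolution, which is the operational sense in which the motif space "stands in" for the physical space of the rod.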

Neural network implementation for trilateration on a tool
Somatosensory regions are characterized by spatial maps of the surfaces of individual body parts (Penfield and Boldrey, 1937). Based on this notion, the above formulation of trilateration for tactile localization on the body surface was previously implemented in a biologically inspired two-layer feedforward neural network (Miller et al., 2022). The first layer consisted of units that were broadly tuned to touch location in skin-based coordinates, as is thought to be encoded by primary somatosensory cortex (S1). The second layer consisted of units whose tuning was characterized by distance-dependent gradients (in peak firing rate and/or tuning width) that were anchored to one of the joints. These units therefore embodied the distance computation specified in Equations 2 and 3. A Bayesian decoder demonstrated that the behavior of this network matched what would be expected from optimal trilateration (Eqs. 2-5), displaying distance-dependent noise and an inverted U-shaped pattern of variability following integration.
While this network relies on the observation that individual primary somatosensory neurons are typically tuned to individual regions of the skin (Delhaye et al., 2018), can it also be reused to perform trilateration in vibration space? The vibratory motifs are unlikely to be spatially organized across the cortical surface. Instead, the nervous system must internalize the isomorphic mapping between the motifs and the physical space of the tool (Fig. 2). Disrupting the expected vibrations disrupts localization (Miller et al., 2018), suggesting that users maintain an internal model of rod dynamics (Imamizu et al., 2000). We assume that this internal model is implemented by units that are tuned to the statistics of the vibratory motifs.
We implemented the trilateral computation (Eqs. 2-5) in a three-layer neural network with four processing stages (Fig. 3). First, the amplitudes of each mode are estimated by a population of units with subpopulations tuned to each resonant mode (layer 1). Second, activation in each subpopulation is integrated by units tuned to the multidimensional statistics of the motifs (layer 2). This layer effectively forms the internal model of the feature space that is isomorphic to the rod's physical space. Next, this activation pattern is transformed into tool-centered coordinates (Eqs. 2, 3) by two decoding subpopulations whose units are tuned to distance from the boundaries of the feature space (layer 3). The population activity of each decoding subpopulation reflects the likelihoods in Equation 4 (Jazayeri and Movshon, 2006). Lastly, the final tool-centered location estimate is derived by a Bayesian decoder (Ma et al., 2006) that integrates the activity of both subpopulations (Eq. 5).
The feature space of vibrations is multidimensional, composed of a theoretically infinite number of modes. However, only the first five modes (Fig. 2A) typically fall within the bandwidth of mechanoreceptors (i.e., ~10-1000 Hz; Johansson and Flanagan, 2009). The first layer of our network was therefore composed of units tuned to the amplitudes of these modes (Fig. 3A, bottom). This layer comprised five subpopulations that each encode an estimate of the amplitude of a specific mode. These units were broadly tuned, with Gaussian (bell-shaped) tuning curves $f^M$ of the following form:

$$f^M(u) = k \exp\!\left(-\frac{(u - \mu)^2}{2\sigma^2}\right), \tag{6}$$

where $k$ is the peak firing rate (i.e., gain), $\mu$ is the tuning center related to the amplitude of the specific mode, $u$ is the mode amplitude of the stimulus, and $\sigma^2$ is the variance of the tuning curve. We modelled the response properties of these units for a given contact location on the rod with likelihood functions $p(r_i^M \mid u)$, denoting the probability that mode amplitude $u$ caused $r_i^M$ spikes in encoding unit $i$. The likelihood function $p(r_i^M \mid u)$ was modeled as a Poisson probability distribution with a Fano factor of one:

$$p(r_i^M \mid u) = \frac{e^{-f_i^M(u)}\,f_i^M(u)^{\,r_i^M}}{r_i^M!}, \tag{7}$$

where $f_i^M$ is the tuning curve of unit $i$. The population response of the encoding units is denoted by a vector $r^M = \{r_1^M, \ldots, r_N^M\}$, where $r_i^M$ is the spike count of unit $i$. The amplitude $u$ of each mode is tied directly to the stimulus location $L$ (Miller et al., 2018).

The function of the next layer is to integrate the estimated amplitudes of each mode, encoded in $r^M$, into a representation of the feature space that can be directly linked to $L$. It does so via units with bell-shaped tuning curves $f^S$ over the feature space (Fig. 3A, middle). The population activity $r^S$ of this layer is a combination of (1) the synaptic input $W^S \cdot r^M$, where "$\cdot$" is the dot product and $W^S$ is the matrix of all synaptic weights; and (2) the uninherited Poisson noise in each unit's spiking behavior (Eq. 7). Each unit $i$ in the second layer was fully connected to each unit in the first layer via a vector of synaptic weights $w_i^S$, which was set to be proportional to $r^M$ for each touch location $L$. For simplicity, the input into the second layer, $f^S(j)$, corresponded to the winner-take-all of the synaptic input, $j = \arg\max(W^S \cdot r^M)$.

The function of the third layer was to estimate the location of $L$ in tool-centered coordinates given the population response $r^S$ of the feature-space layer. We implemented this computation in two independent decoding subpopulations, each of which was "anchored" to one of the boundaries of the feature space (Fig. 3A, top). The population activity $r^D$ of each subpopulation corresponded to $r_i^D = w_i^D \cdot r^S + e_i$, where $w_i^D$ is the vector of synaptic weights connecting unit $i$ to the second layer and $e_i$ is the uninherited Poisson noise in the unit's spiking behavior (Eq. 7). Each unit in the decoding layer was fully connected to each unit in the feature-space layer via $w^D$. We used the MATLAB function fmincon to find the positive-valued weight vector that produced each decoding unit's prespecified tuning curve (see below).

Figure 3. Neural network implementation of trilateration on a tool. A, The three-layer network. (Bottom panel) The mode layer encodes the amplitudes of the first five modes (Fig. 2A), which are location-dependent. (Middle panel) The feature layer takes input from the mode layer and encodes the feature space (Fig. 2B), which forms the isomorphism with the physical space of the tool. (Upper panel) The distance layer is composed of two subpopulations of neurons with distance-dependent gradients in tuning properties (shown: firing rate and tuning width). The distance of a tuning curve from its "anchor" is coded by luminance, with darker colors corresponding to neurons closer to the spatial boundary. B, Activations for each layer of the network, averaged over 5000 simulations, when touch was at 0.75 (space between 0 and 1). Each dot corresponds to a unit of the neural network.
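The encoding scheme of the first layer (Eqs. 6-7) can be sketched in a few lines of Python. The gain, tuning width, and unit count below are assumed toy values rather than the network's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def mode_subpopulation(u, n_units=50, k=20.0, sigma=0.1):
    """Spike counts of one mode-amplitude subpopulation of the first layer.

    Units have Gaussian tuning curves over the mode amplitude u in [0, 1]
    (Eq. 6) and emit Poisson spike counts with a Fano factor of one (Eq. 7).
    Gain, width, and unit count are assumed toy values.
    """
    mu = np.linspace(0.0, 1.0, n_units)             # tuning centers
    f = k * np.exp(-0.5 * ((u - mu) / sigma) ** 2)  # Eq. 6
    return rng.poisson(f)                           # Eq. 7

# Population response to a mode amplitude of 0.6; the amplitude can be read
# back out as the spike-count-weighted mean of the tuning centers
r = mode_subpopulation(0.6)
centers = np.linspace(0.0, 1.0, 50)
u_hat = (centers * r).sum() / r.sum()
```

Despite the Poisson variability of individual units, the population as a whole carries a precise estimate of the mode amplitude, which is what the feature layer integrates across the five subpopulations.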
As in the previous neural network for body-centered tactile localization (Miller et al., 2022), the distance computation (Eqs. 2, 3) was embodied by distance-dependent gradients in the tuning of units $f^D$ in each decoding subpopulation. The gain $k$ of these units formed a distance-dependent gradient (close-to-far: high-to-low gain) across the length of the feature space:

$$k(d) = k_0 - \beta d, \tag{8}$$

where $k_0$ corresponds to the gain of the tuning curve centered on the landmark's location (i.e., distance zero), $d$ is the distance between the center of the tuning curve and the landmark ($d \geq 0$), and $\beta$ is a scaling factor. The width $\sigma$ of each tuning curve can be uniform in either linear or log space. In the latter case, tuning width also forms a distance-dependent gradient (close-to-far: narrow-to-wide tuning) in linear space (Nieder and Miller, 2003), consistent with the Weber-Fechner law:

$$\sigma(d) = \sigma_0 + \gamma d, \tag{9}$$

where $\sigma_0$ corresponds to the width of the tuning curve centered on the landmark's location, $d$ is the distance between the center of the tuning curve and the landmark ($d \geq 0$), and $\gamma$ is a scaling factor. It is important to note that these units $f^D$ are tuned to the feature space, not to the vibrations themselves (as in the encoding layer). Given the isomorphism, we can therefore link their response properties directly to the location of touch $L$.
When neuronal noise is Poisson-like (as in Eq. 7), the gain of a neural population response reflects the precision (i.e., inverse variance) of its estimate (Ma et al., 2006). Therefore, given the aforementioned distance-dependent gradient in gain, the noise in each subpopulation's location estimate (i.e., its uncertainty) increases as a function of distance from a landmark (i.e., the handle or tip). Consistent with several studies (Jazayeri and Movshon, 2006; Ma et al., 2006), we assume that the population responses encode log probabilities. We can therefore decode a maximum-likelihood estimate from each subpopulation as follows:

$$\log p(r^D \mid L) = h^D(L) \cdot r^D, \tag{10}$$

where $h^D$ is a kernel and $r^D$ is the subpopulation response. When neural responses are characterized by independent Poisson noise (Eq. 7), $h^D$ is equivalent to the log of the subpopulation's tuning curves $f^D$ at value $L$ (Jazayeri and Movshon, 2006; Ma et al., 2006):

$$h^D(L) = \log f^D(L). \tag{11}$$

Assuming that the population response reflects log probabilities, optimally integrating both estimates (Eq. 5) amounts to simply summing the activity of each subpopulation:

$$\log p(r^{D_1}, r^{D_2} \mid L) = h^{D_1}(L) \cdot r^{D_1} + h^{D_2}(L) \cdot r^{D_2}, \tag{12}$$

where the optimal estimate $\hat{L}_{INT}$ on a given trial $n$ can be written as the location for which the log-likelihood of the summed population responses is maximal:

$$\hat{L}_{INT,n} = \arg\max_L \left[ h^{D_1}(L) \cdot r_n^{D_1} + h^{D_2}(L) \cdot r_n^{D_2} \right].$$

The above neural network, with a different encoding layer, implements trilateration for localizing touch in body-centered coordinates. Our present neural network (Eqs. 6-12) generalizes the Bayesian formulation of trilateration (Eqs. 2-5) to localizing touch on a tool, using a vibratory feature space as a proxy for tool-centered space. The flow of activity in this network can be visualized in Figure 3B, where touch occurs at 75% of the surface of the tool.
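The decoding stage can be illustrated with a self-contained Python sketch of two anchored subpopulations. For simplicity only the gain gradient is simulated (tuning width is held fixed), the linear gradient and all numeric values are toy assumptions, and the exact Poisson log-likelihood term $-\sum_i f_i^D(L)$ is retained (it is constant, and hence droppable, only under the conditions assumed by Jazayeri and Movshon, 2006):

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate locations and decoding-unit tuning centers (0 = handle, 1 = tip)
grid = np.linspace(0.0, 1.0, 1001)
centers = np.linspace(0.0, 1.0, 60)

def decoding_subpop(anchor, k0=25.0, beta=18.0, sigma=0.08):
    """Tuning curves of one decoding subpopulation: gain falls off with each
    unit's distance from its anchor (linear gradient and values are toy)."""
    k = np.maximum(k0 - beta * np.abs(centers - anchor), 1.0)
    return k[:, None] * np.exp(
        -0.5 * ((grid[None, :] - centers[:, None]) / sigma) ** 2)

f1, f2 = decoding_subpop(0.0), decoding_subpop(1.0)  # handle- and tip-anchored

def decode(L_true):
    """One trial: Poisson spikes from each subpopulation, then ML decoding
    with the log tuning curves as kernels; summing the two log-likelihoods
    implements the integration step."""
    col = np.argmin(np.abs(grid - L_true))
    r1, r2 = rng.poisson(f1[:, col]), rng.poisson(f2[:, col])
    ll1 = r1 @ np.log(f1) - f1.sum(axis=0)  # exact Poisson log-likelihood
    ll2 = r2 @ np.log(f2) - f2.sum(axis=0)
    return grid[np.argmax(ll1)], grid[np.argmax(ll2)], grid[np.argmax(ll1 + ll2)]

# Repeated trials at one location: the far-anchored estimate is noisier,
# and the integrated estimate is the most precise
trials = np.array([decode(0.3) for _ in range(300)])
sd_handle, sd_tip, sd_int = trials.std(axis=0)
```

For touch at 30% of the surface, the tip-anchored estimate is noisier than the handle-anchored one (distance-dependent noise), and the summed log-likelihood yields the most precise estimate, as expected from optimal integration.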
To systematically investigate the behavior of this network, we simulated 5000 instances of touch at a wide range of locations (10% to 90% of the space) on the tool surface. The input into the neural network was the vector of mode amplitudes $h[L]$ for the corresponding location $L$. For simplicity, we did not model the actual process of mode decomposition from the spiking behavior of mechanoreceptors (Miller et al., 2018), but we did assume that the process is affected by sensory noise (Faisal et al., 2008). Therefore, for each simulation, the input $h[L]$ was corrupted by Gaussian noise with a standard deviation of 0.5 (units: % of space). The parameter values for all layers are given in Table 1; all units of each layer shared the same parameter values. We used a maximum log-likelihood decoder to localize touch from the overall response of each subpopulation, either separately or integrated.

Participants
Forty right-handed participants (24 females, 23.7 ± 2.5 years of age) completed our behavioral experiments. Two participants were removed because of an inability to follow task instructions, leaving thirty-eight for analysis. All participants had normal or corrected-to-normal vision and no history of neurologic impairment. Every participant gave informed consent before the experiment. The study was approved by the ethics committee (CPP SUD EST IV, Lyon, France).

Experimental procedure
During the task, participants were seated comfortably in a cushioned chair with their torso aligned with the edge of a table and their right elbow placed in a padded arm rest. The entire arm was hidden from view with a long occluding board. A 60-cm-long rod (handle length: 12 cm; cross-sectional radius: 0.75 cm) was placed in their right hand. The rod was either wooden (25 participants) or PVC (13 participants). The arm was placed at the height necessary for a 1-cm separation between the object (see below) and the rod when held parallel with the table. On the surface of the table, an LCD screen (70 × 30 cm) lay backside down in the lengthwise orientation; the edge of the LCD screen was 5 cm from the table's edge. The center of the screen was aligned with the participant's midline.
The task of participants was to localize touches resulting from active contact between the rod and an object (a foam-padded wooden block). In an experimental session, participants completed two tasks with distinct reporting methods (order counterbalanced across participants). In the image-based task, participants used a cursor to indicate the corresponding location of touch on a downsized drawing of a rod (20 cm in length, handle to tip); the purpose of using a downsized drawing was to dissociate it from the external space occupied by the real rod. The drawing began 15 cm from the edge of the table, was raised 5 cm above the table surface, and was oriented in parallel with the real rod. The red cursor (circle, 0.2-cm radius) was constrained to move within the region of the screen occupied by the drawing. In the space-based task, participants used a cursor to indicate the corresponding location of touch within an empty LCD screen (white background). The cursor was constrained to move along the vertical bisection of the screen and could be moved across the entire length of the screen. It is critical to note that in this task, participants were forced to rely on somatosensory information about tool length and position, as no other sensory cues were available.
The trial structure for each task was as follows. In the "precontact phase," participants sat facing the computer screen with their left hand on a trackball. A red cursor was placed at a random location along the vertical bisection of the screen. A "go" cue (a brief tap on the right shoulder) indicated that they should actively strike the object with the rod. In the "localization phase," participants made their task-relevant judgment with the cursor, controlled by the trackball. Participants never received feedback about their performance. To minimize auditory cues during the task, pink noise was played continuously over noise-cancelling headphones.
The object was placed at one of six locations, ranging from 10 cm from the handle to the tip (10-60 cm from the hand; steps of 10 cm). The number of object locations was unknown to participants. In each task, there were ten trials per touch location, making 60 trials per task and 120 trials in total. The location for each trial was chosen pseudo-randomly. The entire experimental session took ~45 min.
The experiment started with a 5-min sensorimotor familiarization session. Participants were told to explore, at their own pace, how the tool felt when contacting the object at different locations. They were instructed to pay attention to how the vibrations varied with impact location. Visual and auditory feedback of the tool and tool-object contact was prevented with a blindfold and pink noise, respectively. Participants were, however, allowed to hold the object in place with their left hand while contacting it with the tool, but were not allowed to haptically explore the rod.
At the end of the space-based task, participants used the cursor to report where they felt the tip of the rod (aligned in parallel with the screen). The judged location of the tip (mean: 56.5 cm; SEM: 1.62 cm) was very similar to the rod's actual length (i.e., 60 cm). It is critical to reiterate that participants had never seen the rod up to this point of the experiment and therefore likely relied on somatosensory feedback about its dimensions.

Data analysis

Regression analysis
Before analysis, all judgments in the image-based task were converted from pixels of drawing space to percentage of tool space. All judgments in the space-based task were normalized such that the estimated tip location corresponded to 100% of tool space. We then used least-squares linear regression to analyze localization accuracy. The mean localization judgment for each touch location was modelled as a function of actual object location. Accuracy was assessed by comparing the group-level confidence intervals around the slope and intercept.
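The accuracy analysis reduces to a simple linear fit, sketched below in Python with illustrative data (the numbers are invented for the example, not the study's data); accurate localization corresponds to a slope near 1 and an intercept near 0:

```python
import numpy as np

# Illustrative data (not the study's): mean judged location per touch
# location, both expressed in % of tool space
touch = np.array([16.7, 33.3, 50.0, 66.7, 83.3, 100.0])
judged = np.array([18.1, 34.0, 48.9, 65.2, 84.1, 98.5])

# Least-squares fit of mean judgments onto actual locations
slope, intercept = np.polyfit(touch, judged, deg=1)
```

In the actual analysis, such fits were computed per participant and accuracy was assessed from the group-level confidence intervals around the slope and intercept.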

Trilateration model
Our model of trilateration in the somatosensory system assumes that the perceived location of touch is a consequence of the optimal integration of two independent location estimates, $\hat{L}_1$ and $\hat{L}_2$, as exemplified in our formulation of trilateration (Eqs. 1-5). Trilateration predicts that the noise in each estimate varies linearly as a function of the distance of touch from the two landmarks (Eq. 2; Fig. 1B), corresponding to the handle and tip. For any location of touch $L$ along a tactile surface, the noise in each landmark-specific location estimate $\hat{L}_i$ can therefore be written as follows:

$$\sigma_i(L) = \varepsilon_i + \hat{s}\,d_i(L),$$

in which $\varepsilon_i$ is a landmark-specific intercept term that likely corresponds to uncertainty in the location of each landmark, $d_i(L)$ is the distance of touch location $L$ from the landmark (Eqs. 2, 3), and $\hat{s}$ is the magnitude of noise per unit of distance. We assume that the noise term $\hat{s}$ corresponds to a general property of the underlying neural network and therefore model it as the same value for each landmark. The distance-dependent noise of the integrated estimate is therefore (cf. Eq. 5):

$$\sigma_{INT}(L) = \sqrt{\frac{\sigma_1^2(L)\,\sigma_2^2(L)}{\sigma_1^2(L) + \sigma_2^2(L)}}.$$

The three parameters of the model ($\hat{s}$, $\varepsilon_1$, and $\varepsilon_2$) are properties of the underlying neural processes that implement trilateration and are therefore not directly observable. They must instead be inferred with a reverse-engineering approach, in which they serve as free parameters that are fit to each participant's variable errors. We simultaneously fit the three free parameters to the data using nonlinear least-squares regression; optimal parameter values were obtained through maximum-likelihood estimation using the MATLAB routine fmincon. All modeling was done on the combined data from both localization tasks. The R² value for each participant in each experiment was taken as a measure of the goodness-of-fit between the observed and predicted patterns of location-dependent noise.
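The fitting procedure can be sketched in Python with `scipy.optimize.curve_fit` standing in for fmincon. The variable errors below are synthetic, generated from known toy parameters so that the recovery can be checked; they are not participant data:

```python
import numpy as np
from scipy.optimize import curve_fit

def trilateration_sd(L, s_hat, e1, e2):
    """Predicted variable error at touch location L (% of rod space): each
    landmark-specific SD grows linearly with distance, and the two estimates
    are integrated by inverse-variance weighting."""
    sd1 = e1 + s_hat * L             # distance from the handle (at 0%)
    sd2 = e2 + s_hat * (100.0 - L)   # distance from the tip (at 100%)
    return np.sqrt((sd1**2 * sd2**2) / (sd1**2 + sd2**2))

# Synthetic variable errors from known toy parameters, then recovered by
# nonlinear least squares (a Python stand-in for the fmincon-based fit)
L_obs = np.array([16.7, 33.3, 50.0, 66.7, 83.3, 100.0])
y_obs = trilateration_sd(L_obs, 0.12, 2.0, 2.5)
params, _ = curve_fit(trilateration_sd, L_obs, y_obs,
                      p0=[0.1, 1.0, 1.0], bounds=(0.0, [1.0, 20.0, 20.0]))
```

Note that the predicted profile is itself inverted U-shaped: the synthetic errors peak at the middle location and fall off toward the handle and tip, which is the pattern the model is fit against.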

Boundary truncation model
Boundary truncation provides an alternative model to trilateration. This model assumes that the estimate of location L corresponds to a Gaussian likelihood whose variance is identical at all points on the rod. The inverted U-shaped variability arises because these likelihoods are truncated by a boundary, either by the range of possible responses or by a categorical boundary (e.g., between handle and tip). As in Equation 1, we can model each likelihood p(L̂|L) as a normal distribution N(μL, σL), where μL is the location of touch L and σL is the standard deviation. The posterior estimate p(L|L̂) then corresponds to a likelihood truncated at γ1 and γ2, where γ2 > γ1. This truncation distorts the mean and variance of the posterior estimate.
We fit this truncation model to the participant-level variable errors in each of our experiments. The standard deviation for each location, σT(L), was determined by truncating a normal distribution at γ1 and γ2 using the makedist and truncate functions in MATLAB. The model therefore had three free parameters: σT, γ1, and γ2. The value of σT was constrained between 1 and 40, γ1 between −30 and 30, and γ2 between 70 and 130 (units: % of rod surface). These ranges, particularly for γ1 and γ2, are quite unrealistic but were chosen to maximize the model's chances of fitting the variable errors. Fitting procedures for this model were otherwise the same as for the trilateration model.
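The truncation model's core quantity, the standard deviation of a truncated normal, can be computed in closed form. Below is a self-contained Python sketch (standing in for MATLAB's makedist/truncate; the parameter values are hypothetical):

```python
import math

def _phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_sd(mu, sigma, lo, hi):
    """SD of a Normal(mu, sigma) likelihood truncated to [lo, hi].

    Under boundary truncation, localization noise shrinks as the touch
    location mu approaches either truncation boundary, producing an
    inverted U-shape without any distance-dependent noise.
    """
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    Z = _Phi(b) - _Phi(a)
    term1 = (a * _phi(a) - b * _phi(b)) / Z
    term2 = ((_phi(a) - _phi(b)) / Z) ** 2
    return sigma * math.sqrt(1.0 + term1 - term2)

# Touch at the middle vs near a boundary of the rod (% of rod surface)
sd_mid = truncated_sd(mu=50.0, sigma=20.0, lo=0.0, hi=100.0)
sd_edge = truncated_sd(mu=5.0, sigma=20.0, lo=0.0, hi=100.0)
```
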

Model comparisons
We used the Bayesian Information Criterion (BIC) to compare the boundary truncation and trilateration models. The difference in BIC (ΔBIC) was used to determine a significant difference in fit. Consistent with convention, the chosen cutoff for moderate evidence was a ΔBIC of 2 and the cutoff for strong evidence was a ΔBIC of 6.
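For a least-squares fit with Gaussian errors, the BIC and the evidence cutoffs above can be sketched as follows (the residual sums of squares below are hypothetical, purely for illustration):

```python
import math

def bic(rss, n, k):
    """BIC for a least-squares fit with Gaussian errors.

    rss: residual sum of squares; n: number of data points;
    k: number of free parameters (3 for both models here).
    """
    return n * math.log(rss / n) + k * math.log(n)

def evidence_label(delta_bic):
    """Conventional cutoffs: ΔBIC > 2 moderate, ΔBIC > 6 strong."""
    if delta_bic > 6:
        return "strong"
    if delta_bic > 2:
        return "moderate"
    return "weak/equivocal"

# Hypothetical fits: the better model halves the residual error
n = 12  # e.g., 6 touch locations x 2 tasks
delta = bic(rss=40.0, n=n, k=3) - bic(rss=20.0, n=n, k=3)
label = evidence_label(delta)
```

Because both models have three free parameters, the parameter-count penalty cancels and ΔBIC here reduces to a comparison of residual error.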

Data and code availability
The neural network and anonymized behavioral data have been deposited in the Open Science Framework (DOI 10.17605/OSF.IO/ERBGW).

Accurate localization of touch on a tool
In the current experiment (n = 38), we investigated whether tactile localization on a 60 cm hand-held rod is characterized by the U-shaped pattern of variability (Fig. 1B) that is characteristic of trilateration when localizing touch on the body. In two tasks, we measured participants' ability to localize an object that was actively contacted with a hand-held tool. In the image-based task, participants indicated the point of touch on a downsized drawing of the tool. In the space-based task, participants indicated the point of touch in external space. The latter task ensured that localization was not truncated by boundaries in the range of possible responses.
Consistent with prior results (Miller et al., 2018), we found that participants were generally quite accurate at localizing touch on the tool. Linear regressions (Fig. 4A) comparing perceived and actual hit location found slopes near unity in both the image-based task (mean slope: 0.93, 95% CI [0.88, 0.99]) and the space-based task (mean slope: 0.89, 95% CI [0.82, 0.95]). Analysis of the variable errors (2 × 6 repeated-measures ANOVA) found a significant main effect of hit location (F(5,185) = 36.1, p < 0.001) but no main effect of task (F(1,37) = 0.39, p = 0.54) and no interaction (F(5,185) = 0.21, p = 0.96). Crucially, the pattern of variable errors (Fig. 4B) in both tasks displayed the hypothesized inverted U-shape, which was of similar magnitude to what has been observed for touch on the arm (Cholewiak and Collins, 2003; Miller et al., 2022).

Computational modeling of behavior
We next used computational modeling to confirm that the observed pattern of variable errors was indeed due to trilateration. We fit each participant's variable errors with a probabilistic model of optimal trilateration (Fig. 1A,B) that was derived from its theoretical formulation (see Materials and Methods). We compared the trilateration model with an alternative hypothesis: that the inverted U-shaped pattern is due to truncation at the boundaries of localization (Petzschner et al., 2015), which cuts off the range of possible responses and thus produces lower variability at these boundaries. We fit a boundary truncation model to compare directly with our trilateration model. Given the lack of a main effect of task, and to increase statistical power, we collapsed across both tasks in this analysis.
Our computational model of trilateration provided a good fit to the variable errors observed during tactile localization on a tool. This was evident at the group level, where the magnitude of variable errors was similar to what has been found when localizing touch on the arm (Fig. 5A). We further observed a high coefficient of determination at the level of individual participants (median R²: 0.75; range: 0.29-0.95); indeed, 30 out of 38 participants had an R² > 0.6. The fits of the trilateration model to the data of six randomly chosen participants can be seen in Figure 5B; the fits to each participant's behavior can be seen in Extended Data Figures 5-1 and 5-2. In contrast, the R² of the boundary truncation model was substantially lower than that of the trilateration model (median: 0.30; range: −0.19 to 0.71), never showing a better fit to the data in any participant (Fig. 6A).
We next compared each model directly using the Bayesian Information Criterion (BIC). The BIC score for the trilateration model was lower in all 38 participants (mean ± SD; trilateration: 11.88 ± 5.88; truncation: 18.74 ± 4.70). Statistically, 33 participants showed at least moderate evidence (ΔBIC > 2) and 20 participants showed strong evidence (ΔBIC > 6) in favor of trilateration (Fig. 6B). In total, our results strongly suggest that, as with the body, localizing touch on a tool is consistent with a computation via trilateration.

Neural network simulations
Finally, we simulated trilateration on a tool using a biologically inspired neural network with a similar architecture to one we have used previously. The goal of these simulations was to concretely demonstrate that the feature space of vibratory motifs could stand in for the physical space of the rod. Our neural network thus took the mode amplitudes as input and trilaterated the resulting touch location in tool-centered coordinates (5000 simulations per location).
Both the mode and feature space layers of the neural network (Fig. 3, bottom and middle) produced unbiased sensory estimates with minimal uncertainty (Extended Data Fig. 7-1). Crucially, both subpopulations in the distance-computing layer (layer 3; Fig. 3, top) were able to localize touch with minimal constant error (Fig. 7A, upper panel), demonstrating that each could produce unbiased estimates of location from the sensory input. However, as predicted given the gradient in tuning parameters, the noise in their estimates rapidly increased as a function of distance from each landmark (Fig. 7B, upper panel), forming an X-shaped pattern across the surface of the tool.
We next examined the output of the Bayesian decoder from Equations 11 and 12 (Fig. 7, lower panel). As expected, we observed the computational signature of trilateration: integrating both estimates resulted in an inverted U-shaped pattern of variability across the surface of the tool (Fig. 7B, lower panel).
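The decoder's integration step can be illustrated with a minimal sketch (hypothetical noise parameters; this is an inverse-variance-weighted fusion, not the network's full population decoding):

```python
import numpy as np

def integrate_estimates(mu1, sd1, mu2, sd2):
    """Inverse-variance-weighted fusion of two location estimates,
    a maximum-likelihood stand-in for the Bayesian decoder's output."""
    w1 = sd2**2 / (sd1**2 + sd2**2)
    mu = w1 * mu1 + (1.0 - w1) * mu2
    sd = np.sqrt((sd1**2 * sd2**2) / (sd1**2 + sd2**2))
    return mu, sd

# Touch at 0.75; each landmark-anchored estimate gets noisier with distance
loc = 0.75
sd1 = 0.02 + 0.10 * loc           # estimate anchored at the handle (0)
sd2 = 0.02 + 0.10 * (1.0 - loc)   # estimate anchored at the tip (1)
mu_int, sd_int = integrate_estimates(loc, sd1, loc, sd2)
```

The integrated estimate is always more precise than either input estimate, which is why precision is highest where one landmark is close by and lowest in the middle, where both inputs are mediocre.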

Discussion
If tools are embodied by the sensorimotor system, we would expect the brain to repurpose its body-based sensorimotor computations to perform similar tasks with tools. Using tactile localization as our case study, we uncovered multiple pieces of evidence that are consistent with this embodied view. First, as is the case for body parts, we observed that localizing touch on the surface of a tool is characterized by perceptual anchors at the handle and tip (de Vignemont et al., 2009). Second, computational modeling of behavioral responses suggests that they are the result of a probabilistic computation involving trilateration; indeed, perceptual anchors are a computational signature of trilateration. Finally, using a simple three-layer population-based neural network, we demonstrated the possibility of trilateration in the vibratory feature space evoked by touches on tools. This neural network transformed a vibration-based input into a spatial code, reproducing perceptual anchors on the tool surface. These findings go well beyond prior research on embodiment (Martel et al., 2016) by identifying a computation that functionally unifies tools and limbs. Indeed, they suggest that trilateration is a spatial computation employed by the somatosensory system to localize touch on body parts and tools alike (Miller et al., 2022). They further have important implications for how trilateration would be repurposed at a neural level for tool-extended sensing.
If trilateration is a fundamental spatial computation used by the somatosensory system, it should be employed to solve the same problem (i.e., localization) regardless of whether the sensory surface is the body or a tool. Previous tactile localization studies have reported increased perceptual precision near the boundaries of the hands (Elithorn et al., 1953; Miller et al., 2022), arm (Cholewiak and Collins, 2003; de Vignemont et al., 2009; Miller et al., 2022), feet (Halnan and Wright, 1960), and abdomen (Cholewiak et al., 2004). These perceptual anchors are a signature of a trilateration computation (Miller et al., 2022). The results of the present study are consistent with the use of trilateration to localize touch on tools as well.
Our findings provide computational evidence that tools are embodied in the sensorimotor system (Martel et al., 2016), an idea that was proposed over a century ago (Head and Holmes, 1911). The close functional link between tools and limbs is not just a superficial resemblance but rather a reflection of the repurposing of neurocomputational resources dedicated to sensing and acting with a limb to that with a tool (Makin et al., 2017). This repurposing may be one reason that tool use leads to measurable changes in body perception and action processes (Cardinali et al., 2009; Canzoneri et al., 2013; Miller et al., 2014, 2019a).
Whereas the present study focused on simply shaped tools (i.e., straight rods), tactile localization is also possible on more complexly shaped tools (Yamamoto et al., 2005). We propose that trilateration also underlies tactile localization on these tools. We leveraged our trilateration model to simulate patterns of tactile localization on rods with different numbers of segments (Fig. 8). For multisegmented limbs (e.g., the arm), trilateration occurs locally within each segment (Cholewiak and Collins, 2003; Miller et al., 2022). That is, the signature inverted U-shaped pattern of variability is observed within each segment (e.g., upper and lower arms). Our simulations suggested that the same would be true for multisegmented tools (Fig. 8B). We predict that tactile localization within each segment of a rod would be characterized by the signature pattern of variability indicative of trilateration.
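The segment-wise prediction can be sketched by applying the trilateration noise formula locally between consecutive boundaries (a Python illustration with hypothetical parameters, not the published simulation code):

```python
import numpy as np

def segment_noise(locs, boundaries, sigma_d=1.0, kappa=0.3):
    """Predicted variable error on a multisegmented rod.

    Trilateration is applied locally within each segment, so an inverted
    U-shaped noise profile repeats between consecutive boundaries.
    sigma_d and kappa are illustrative (hypothetical) parameters.
    """
    locs = np.asarray(locs, dtype=float)
    out = np.empty_like(locs)
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        mask = (locs >= lo) & (locs <= hi)
        d1 = locs[mask] - lo   # distance from segment start
        d2 = hi - locs[mask]   # distance from segment end
        sd1 = kappa + sigma_d * d1
        sd2 = kappa + sigma_d * d2
        out[mask] = np.sqrt((sd1**2 * sd2**2) / (sd1**2 + sd2**2))
    return out

# Two-segment rod: segment boundaries at 0, 0.5, and 1
locs = np.array([0.05, 0.25, 0.45, 0.55, 0.75, 0.95])
noise = segment_noise(locs, boundaries=[0.0, 0.5, 1.0])
```

Each segment shows its own noise peak near its middle, reproducing the repeating inverted U-shape predicted for multisegmented tools.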
Although trilateration was repurposed for localizing touch on a rod, we observed a noticeable difference in the overall shape of variable error between localizing touch on a rod versus a limb (e.g., the arm; Fig. 5A). Whereas localization uncertainty (i.e., variable error) is typically symmetric about the center of a limb (Miller et al., 2022), uncertainty was asymmetric for the rod. Specifically, variable errors were lower near the handle than the tip, peaking away from the center of the rod and toward the tip. These patterns of variable error were also visible in the behavior of individual participants (Extended Data Figs. 5-1 and 5-2) and are a direct consequence of differences in the baseline uncertainty of each distance estimate (Eq. 13), as demonstrated by simulations in Miller et al. (2022).
There are at least two potential sources for these differences in baseline uncertainty. First, striking the rod near the tip may produce less consistent sensory information (i.e., vibrations), translating into greater sensory uncertainty about where the rod is touched. However, this explanation is unlikely since the hypothesized differences in sensory consistency were not observed in a previous study that characterized a rod's vibratory motifs (Miller et al., 2018). Instead, the source of this difference may lie in the uncertainty of where each boundary is perceived in space via proprioceptive feedback (Eq. 3). The location of the handle is well defined, as it corresponds to the position of the hand. The location of the tip is less well defined, as it must be extracted indirectly from proprioceptive feedback from the forelimb (Debats et al., 2012). This likely corresponds to higher estimation uncertainty of its position in space, contributing to greater baseline uncertainty of the tip-based distance estimate (Eq. 13). Future studies should attempt to adjudicate between these two hypotheses.
Another important difference between limbs and tools is the sensory input used to derive localization estimates. While the skin is innervated with sensory receptors, the somatosensory system must "tune into" a tool's mechanical response to extract meaningful information from it. It was previously proposed that where a rod is touched is encoded by the amplitudes of its resonant responses when contacting an object (Miller et al., 2018, 2019b). These resonant modes form a feature space that is isomorphic with the physical space of the tool. At a peripheral level, these resonances are re-encoded by the spiking patterns of tactile mechanoreceptors (Johnson, 2001). Therefore, unlike for touch on the body, localizing touch on a tool requires the somatosensory system to perform a temporal-to-spatial transformation.
We used neural network simulations to embody the necessary transformations to implement trilateration on a tool. Our neural network assumes that the human brain contains neural populations that encode the full feature space of rod vibration. While very little is known about how these types of naturalistic vibrations are represented by the somatosensory system, our modeling results and prior research (Miller et al., 2018, 2019a) suggest that there are neural populations that encode their properties. Previous work demonstrated that individual neurons in primary somatosensory cortex multiplex both amplitude and frequency in their firing properties (Harvey et al., 2013). Recent evidence further suggests that human S1 is tuned to individual vibration frequencies (Wang and Yau, 2021). Our neural network modeling assumes that there are also neurons tuned to the amplitude of specific frequencies, though direct empirical evidence for this tuning is currently lacking. The existence of this coding would be consistent with the finding that S1 performs the initial stages of localization on a rod (Miller et al., 2019a). Furthermore, resonant amplitudes are crucial pieces of information in the natural statistics of vibrations, making it plausible that they are encoded at some stage of processing. Our results therefore open up a new avenue for neurophysiological investigations into how naturalistic vibrations are encoded by the somatosensory system.
The present study demonstrates the biological possibility that the resonant feature space can stand in for the physical space of the tool, allowing for trilateration to be performed to localize touch in tool-centered coordinates.It is interesting to note that the present neural network had a similar structure to one we previously demonstrated could perform trilateration on the body surface.The biggest difference is the input layer, which must first encode the vibration information.However, once this is transformed into the representation of the feature space, the computation proceeds as it would for the body.Note that this does not necessitate that the same neural populations localize touch on limbs and tools (Schone et al., 2021), but only that the same computation is performed when localizing touch on both surfaces.Our network therefore provides a concrete demonstration of what it means to repurpose a body-based computation to localize touch on a tool.The repurposing of the neural network architecture for trilateration explains tool embodiment and the emergence of a shared spatial code between tools and skin.

Figure 3 .
Figure 3. Neural network implementation of trilateration. A, Neural network implementation of trilateration: (lower panel) the Mode layer is composed of subpopulations (two shown here) sensitive to the weight of individual modes (Fig. 2A), which are location-dependent; (middle panel) the Feature layer takes input from the mode layer and encodes the feature space (Fig. 2B), which forms the isomorphism with the physical space of the tool; (upper panel) the Distance layer is composed of two subpopulations of neurons with distance-dependent gradients in tuning properties (shown: firing rate and tuning width). The distance of a tuning curve from its "anchor" is coded by luminance, with darker colors corresponding to neurons that are closer to the spatial boundary. B, Activations for each layer of the network, averaged over 5000 simulations, when touch was at 0.75 (space between 0 and 1). Each dot corresponds to a unit of the neural network: (lower panel) mode layer, with three of five subpopulations shown; (middle panel) feature layer; (upper panel) distance layer of localization for each decoding subpopulation.

Figure 4.
Figure 4. Localization and variable error for both tasks. A, Regressions fit to the localization judgments for both the image-based (blue) and space-based (orange) tasks. Error bars correspond to the group-level 95% confidence interval. B, Group-level variable errors for both tasks. Error bars correspond to the group-level 95% confidence interval.

Figure 6 .
Figure 6. Trilateration provides a better fit to the data than boundary truncation. A, Participant-level goodness of fit (R²) for the trilateration model (left, purple) and the boundary truncation model (right, green). For each participant, trilateration was a better fit to the data. B, Histogram of the ΔBIC values used to adjudicate between the two models, color-coded by the strength of the evidence in favor of trilateration. Purple corresponds to substantial evidence in favor of trilateration; pink corresponds to moderate evidence in favor of trilateration; gray corresponds to weak/equivocal evidence in favor of trilateration. Note that in no case did the boundary truncation model provide a better fit to the localization data (i.e., ΔBIC < 0).

Figure 7 .
Figure 7. Neural network simulations. A, Localization accuracy for the estimates of each decoding subpopulation (upper panel; L̂1, blue; L̂2, red) and after integration by the Bayesian decoder (lower panel; L̂INT, purple). B, Decoding noise for each decoding subpopulation (upper panel) increased as a function of distance from each landmark. Note that distance estimates are made from the 10% and 90% locations for the first (blue) and second (red) decoding subpopulations, respectively. Integration via the Bayesian decoder (lower panel) led to an inverted U-shaped pattern across the surface. Note the differences in y-axis range between panels. The results of decoding for the mode and feature space layers of the network can be seen in Extended Data Figure 7-1.

Figure 8 .
Figure 8. Simulations of multisegmented rods. We simulated how trilateration operates within rods with different numbers of segments. Here, we show the predicted patterns of variability for (A) a single-segment rod (used in the present study) and (B) two-segment (left) and three-segment (right) rods. The magnitude of variable error is color-coded from red to blue (low to high). The inverted U-shaped pattern of variability was observed in each segment.

Table 1:
Neural network parameter values