Abstract
Previous research has extensively evaluated the impact of delay on the value of positive reinforcers, but the study of its impact on the value of aversive consequences is scarce. The present study employed a modification of Evenden and Ryan’s procedure (1996, Psychopharmacology, 128(2), 161–170) to obtain data on temporal discounting of an aversive consequence, with rats as experimental subjects. In the first phase of the procedure, rats chose between one-pellet and four-pellet alternatives; when subjects developed preference for the larger-amount alternative, a shock was added to it, resulting in a loss of preference. In the first experimental condition, the delay to shock was progressively increased within each session from zero to 40 s (ascending delays), which resulted in a recovery of the preference for the larger-amount + shock alternative as the delay to shock was increased. In a subsequent condition (descending delays) the delay to shock was progressively decreased within each session, from 40 to 0 s. In both conditions, the preference for the smaller-amount no-shock alternative was well described by a hyperbolic function. The order of presentation of the delays within the session, ascending or descending, did not alter the relationship between preference and delay to shock. The temporal discounting curve obtained in the present study could represent a baseline for analyzing the impact that diverse environmental and pharmacological variables have on the temporal discounting of aversive consequences.
When making a choice between two alternatives that differ in both amount of reinforcement and delay to its delivery, human and nonhuman organisms tend to prefer the one with the shortest delay, even when doing so prevents the maximization of the amount of reinforcement obtained in the long term (Vanderveldt, Oliveira, & Green, 2016). This preference is a result of temporal discounting (i.e., the decrement of the value of an event as the delay to its delivery increases; Mazur, 1987). Research on this topic has attracted much attention in psychology and neuroscience because of its replicability across species and reinforcers (Green & Myerson, 2004; Green, Myerson, Holt, Slevin, & Estle, 2004; Oberlin & Grahame, 2009), and because of the correlations that have been found between the degree of temporal discounting and the probability of suffering different health problems, such as substance use disorder (Reynolds, 2006), pathological gambling (Alessi & Petry, 2003), ADHD (Wilson, Mitchell, Musser, Schmitt, & Nigg, 2011), and obesity (Weller, Cook, Avsar, & Cox, 2008), to name just a few.
Another common choice situation occurs when organisms choose between an alternative associated with a small reinforcer and another associated with a combination of a larger reinforcer and a delayed aversive stimulus. This choice structure models many of the choices associated with risk behavior in humans, which may lead to a diversity of health problems that have been linked to impulsivity (Critchfield & Kollins, 2001). The high prevalence of these disorders suggests that delayed aversive outcomes have a small impact on preference, and it highlights the importance of investigating the variables that influence sensitivity to delayed aversive consequences.
Unfortunately, most studies on temporal discounting have focused on the decrement of the value of positive reinforcers. For example, in studies with humans, although the most commonly employed reinforcer has been hypothetical monetary rewards (Green & Myerson, 2004), the temporal discounting of other positive reinforcers has also been investigated (Jimura et al., 2011; MacKillop et al., 2011; Madden, Begotka, Raiff, & Kastern, 2003; Odum & Rainaud, 2003). In research with nonhuman subjects, the reinforcers have been different types of food (Freeman, Green, Myerson, & Woolverton, 2009; Freeman, Nonnemacher, Green, Myerson, & Woolverton, 2012; Green et al., 2004; Pinkston & Lamb, 2011; Reynolds, De Wit, & Richards, 2002) and drugs (Oberlin & Grahame, 2009; Woolverton, Myerson, & Green, 2007).
In contrast to the extensive research with positive reinforcers, the study of the temporal discounting of aversive consequences is rather scarce and has been limited to hypothetical losses in humans (Estle, Green, Myerson, & Holt, 2006; Ostaszewski & Karzel, 2002). Although it is well known that the value of aversive consequences is sensitive to the effects of delay for both human and nonhuman animals in single-operant schedules (Azrin & Holz, 1966; Banks & Vogel-Sprott, 1965; Baron, 1965), and that the addition of nondelayed aversive consequences diminishes the value of a positively reinforced alternative, the effect of delayed aversive consequences in choice procedures has rarely been investigated (but see Deluty, 1978; Deluty, Whitehouse, Mellitz, & Hineline, 1983). To our knowledge, the only study that has explored the relationship between delay and the value of aversive consequences within a temporal discounting framework was performed by Woolverton, Freeman, Myerson, and Green (2012). In that study, rhesus monkeys were given a choice between two alternatives that delivered the same amount of reinforcement (cocaine) but differed in the delay to an aversive consequence (an injection of histamine); in one of the alternatives, the injection of histamine was immediate, while in the other it was delayed. Choice of the alternative with the immediate injection decreased as a function of the delay to the other alternative’s injection, and this decrement was well described by a hyperboloid discounting function, similar to what is observed in temporal discounting with positive reinforcers.
The analysis of the symmetry between the effects that positive and aversive consequences have on behavior, and of how aversive and appetitive stimuli are combined, has been a central question in the study of choice behavior in general (Bouzas, 1978; De Villiers, 1977; Deluty et al., 1983; Farley & Fantino, 1978; Rachlin & Herrnstein, 1969). To advance this analysis in the temporal discounting area, it seems necessary to increase research on aversive consequences. Furthermore, considering the strong implications that insensitivity to delayed aversive outcomes has for diverse applied research areas (Odum, Madden, & Bickel, 2002), the inequality in the attention given by researchers to the temporal discounting of aversive consequences may be explained by the lack of procedures that allow it to be readily studied.
The goal of this study was to gather evidence of the change in value of an aversive consequence as a function of the delay to its delivery. To accomplish this, we modified the procedure proposed by Evenden and Ryan (1996), which has been widely used in research on the temporal discounting of positive reinforcers (Cardinal, Pennicott, Lakmali, Robbins, & Everitt, 2001; Cardinal, Robbins, & Everitt, 2000; Orsini et al., 2017; Winstanley, Theobald, Cardinal, & Robbins, 2004); in this procedure, animals face a choice between a smaller-amount alternative and a larger-amount alternative whose delay is increased within sessions. This last characteristic allows a discounting function to be obtained in a single session.
In the present study, subjects were given a choice between two alternatives associated with different amounts of reinforcement (one vs. four pellets), both delivered immediately. When subjects developed a preference for the larger-amount alternative, an aversive consequence was added to it, whose delay was increased within a session (0, 5, 10, 20, 40 s). We predicted a strong preference for the smaller reinforcer alternative at the beginning of the session, which would decrease as the delay to the aversive consequence associated with the choice of the larger reinforcer alternative was increased. In a subsequent condition, the order of presentation of delays was reversed (40, 20, 10, 5, 0 s), to control for possible short-term habituation to the shock.
Method
Subjects
Subjects were 10 experimentally naïve male Wistar rats, approximately 60 days old, obtained from the vivarium of the Institute of Cellular Physiology, Universidad Nacional Autónoma de México. After habituation to the conditions of the animal housing room, body weights were reduced to 85% of free-feeding values by gradually reducing food intake over 7 days. After each session, rats had access to food in order to maintain them at approximately 85% of their free-feeding weight, according to growth graphs obtained from the supplier. Each subject was individually housed, and water was available ad lib in the home cage. The experiment followed the official Mexican norm NOM-062-ZOO-1999, Technical Specification for Production, Use and Care of Laboratory Animals.
Apparatus
Five operant conditioning chambers (MED Associates, Inc., Model ENV 008-VP) served as the experimental spaces. Each operant chamber measured 30.5 cm (long) × 24.1 cm (wide) × 21.0 cm (tall), and was enclosed in a sound-attenuating cubicle (MED Associates, Inc., Model ENV-022 M). The floor was a stainless steel grid composed of nineteen 0.48-cm-diameter bars (MED Associates, Inc., Model ENV-005), which were connected to an aversive stimulator (MED Associates, Inc., Model ENV-414S) that delivered scrambled aversive stimulation within the range 0–5 mA. Each chamber had two retractable response levers (MED Associates, Inc., Model ENV-112CM) located on the front wall, 10.5 cm above the floor. Each lever was 4.8 cm wide. The visual stimuli were a 28-V, 100-mA houselight (MED Associates, Inc., Model ENV-215 M), situated 1.3 cm below the top of the chamber at the center of the back wall, and two triple stimulus displays, each situated 1.5 cm above a lever. Each triple stimulus display consisted of an acrylic bar mounted on an aluminum bar, with three apertures 1 cm in diameter separated by .6 cm, which could project (from left to right) red, white, and blue light via ultrabright LEDs. A 5.1 cm × 5.1 cm pellet receptacle (MED Associates, Inc., Model ENV-200R2M) was located in the center of the front wall, 2.5 cm above the floor, and received, according to the schedule, 45-mg food pellets (Bio-Serv, Product F0165) from a circular modular pellet dispenser (MED Associates, Inc., Model ENV-203 M). The presentation of stimuli and the collection of data were controlled by personal computers using the Medstate programming language (Med-PC-IV, MED Associates, Inc.).
Procedure
Habituation, magazine training, and lever response training.
When subjects were at 85% of their ad-lib body weight, they were habituated to the operant box over a 30-min session, during which 25 pellets were available in the magazine. Habituation was considered finished when the subject ate all the pellets in a session. During the next two sessions, the white lights above both levers were turned on, the levers were extended, and a pellet was dispensed every 45 s, or whenever the rat pressed either lever. These sessions ended after 60 reinforcers were obtained or 40 min elapsed, whichever came first. If subjects obtained at least 50 reinforcers by lever pressing, on the next day only one lever was presented (counterbalanced across subjects); if subjects obtained 80 reinforcers, this procedure was repeated with the other lever. When subjects obtained 80 reinforcers in two consecutive sessions, shaping was finished, and pretraining began the next session.
Pretraining: Basic procedure and assessment of preference for the large amount alternative.
Sessions were divided into five identical blocks; in each, two forced-choice trials were followed by six free-choice trials. Each session started with the chamber in darkness; after 40 s elapsed, the houselight and the light over the feeder were turned on. If the subject made a nose-poke response in the feeder within 10 s of the light presentation, one (forced-choice trials) or two (free-choice trials) levers were extended, and both the houselight and the feeder light were turned off. The first lever press within 10 s after lever presentation delivered the reinforcer. One of the levers was associated with the delivery of one pellet, while the other was associated with the delivery of four pellets, with their positions counterbalanced across subjects (see Fig. 1a). Either consequence was delivered immediately (.05 s). During the two forced-choice trials, which were presented at the beginning of each block, each lever was presented once in random order. If the subject did not nose poke within 10 s after the light over the feeder was presented, or did not press a lever within 10 s of its presentation, the trial was marked as an omission and ended. After the trial ended, an intertrial interval (ITI) began, whose duration was adjusted so that 100 s elapsed from the beginning of the current trial to the beginning of the next one. During the ITI, the chamber was in darkness. This phase ended when the proportion of choice for the larger-amount alternative was higher than .85 during three consecutive sessions. When this criterion was met, the position of the alternatives was reversed, in order to rule out a potential position bias. When preference for the larger-amount alternative had developed again, subjects entered the next pretraining phase.
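The trial pacing described above (a fixed 100-s cycle from one trial onset to the next) can be sketched as follows. The function name and the assumption that the ITI simply absorbs whatever time the trial events did not consume are ours, not part of the published procedure.

```python
def iti_duration(trial_elapsed_s, cycle_s=100.0):
    """Return the intertrial interval (s) needed so that successive
    trials start every cycle_s seconds.

    trial_elapsed_s: time consumed by the trial itself (cue
    presentation, response, reinforcer delivery, and any programmed
    delay). Hypothetical helper illustrating the adjustment rule
    described in the text."""
    return max(0.0, cycle_s - trial_elapsed_s)
```

For example, a trial whose events consume 25 s would be followed by a 75-s dark ITI, whereas a trial running longer than the cycle would be followed immediately by the next trial.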
Pretraining: Introduction of shock and adjustment of its intensity.
In this phase, the only difference from the procedure described above was that a .5-mA, 1-s shock was added to the larger-amount alternative (see Fig. 1b). It was expected that, with the addition of this stimulus, the preference for the larger-amount alternative would decrease. When the proportion of choice for the larger-amount alternative was lower than .35 during three consecutive sessions, it was assumed that the shock was aversive and this phase ended. However, there were two other possibilities: either the shock intensity was too high and subjects stopped responding on both alternatives, or it was too weak and the preference for the larger-amount alternative did not decrease. In either case, shock intensity was adjusted, decreasing or increasing it in .1-mA steps, until preference was in the desired range.
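The titration rule just described can be summarized in a short sketch. The function name and argument structure are our simplified reading of the text (with the .35 choice criterion and the .1-mA step taken from it), not code from the study.

```python
def adjust_intensity(intensity_ma, last3_choice_props, responding=True, step=0.1):
    """One titration step for shock intensity (mA), per the rule in the
    text.

    last3_choice_props: proportion of choice for the larger-amount +
    shock alternative over the last three sessions. Hypothetical
    helper."""
    if not responding:
        # Shock presumably too intense: subject stopped responding entirely.
        return round(intensity_ma - step, 2)
    if all(p >= 0.35 for p in last3_choice_props):
        # Shock presumably too weak: preference never fell below the criterion.
        return round(intensity_ma + step, 2)
    # Preference within the desired range: keep the current intensity.
    return intensity_ma
```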
Temporal discounting of shock: Ascending delays.
In this phase, the delay to shock presentation was manipulated across blocks within a session; the delay to shock was 0, 5, 10, 20, and 40 s during Blocks 1–5, respectively. During this delay, different discriminative stimuli were presented over the lever associated with the larger-amount alternative (see Fig. 1c). During training, we noticed that some subjects’ preference for the larger-amount alternative increased even when the delay to shock was 0 s, suggesting habituation. We considered that a subject had habituated when its proportion of choice for the larger-amount alternative during the 0-s delayed shock was higher than .5 during three consecutive sessions. In this case, we increased the shock intensity by .1 mA.
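The habituation check reduces to a simple test on the 0-s delay block. This helper and its name are illustrative only; the .5 threshold and the three-session window come from the criterion above.

```python
def habituated(last3_zero_delay_props, threshold=0.5):
    """Flag long-term habituation to the shock: the proportion of choice
    for the larger-amount + shock alternative in the 0-s delay block
    exceeded `threshold` on three consecutive sessions. Illustrative
    helper, not code from the study."""
    return all(p > threshold for p in last3_zero_delay_props)
```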
Temporal discounting of shock: Descending delays.
With the goal of discarding within-session habituation to the shock as a confounding variable associated with the discounting curves obtained in the previous phase, the delays were presented in descending order (40, 20, 10, 5, 0 s in Blocks 1–5, respectively) during the next experimental phase (see Fig. 1f).
Data analysis
During the pretraining phases, the proportion of choice for the larger-amount alternative was calculated for each individual during the last three days before meeting the stability criterion (preference for the larger-amount alternative higher than .85 during three consecutive days). During the experimental conditions, choice behavior was considered stable for each individual when, in the last block of three sessions, the mean choice proportion for the smaller-amount no-shock alternative at each delay was neither the highest nor the lowest of the last five blocks of three sessions. It was also required that the shock intensity had not been increased or decreased during the last 15 sessions. The proportion of choice for the smaller-amount no-shock alternative was calculated for each delay to shock, and data from the last three sessions were plotted as a function of the delay to the shock associated with the larger-amount alternative. An ANOVA was performed to evaluate the effects of condition (ascending and descending delays) and delay (0, 5, 10, 20, and 40 s). In the experimental conditions, we also calculated two additional indices of preference: the acceptance latencies and the number of omitted trials for each alternative during the forced-choice trials. ANOVAs were conducted to evaluate the effects of condition (ascending and descending delays), delay (0, 5, 10, 20, and 40 s), and alternative (larger-amount + shock; smaller-amount no-shock). To quantify the sensitivity to delay, the following model was fitted to each individual’s data from the last block of three sessions:
V = M / (1 + kD)

where V represents the value of the alternative, indicated by the proportion of choice; D represents the delay; k represents the sensitivity to delay; and M represents the starting point of the curve. The parameters and the variance accounted for, obtained from the ascending and descending delays conditions, were compared via dependent-samples t tests.
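As a sketch of how this fit can be carried out, the following uses a plain grid search over k and M to minimize squared error. The paper does not specify its fitting routine, so the function names, grid ranges, and step sizes here are our assumptions, standing in for a proper nonlinear regression.

```python
def hyperbolic(delay, k, m):
    """Hyperbolic discounting model from the text: V = M / (1 + k * D)."""
    return m / (1.0 + k * delay)

def fit_hyperbolic(delays, proportions):
    """Least-squares fit of k and M by exhaustive grid search (a simple,
    illustrative stand-in for nonlinear regression)."""
    best = None
    for ki in range(1, 1001):          # k from 0.001 to 1.000
        k = ki / 1000.0
        for mi in range(50, 101):      # M from 0.50 to 1.00
            m = mi / 100.0
            sse = sum((p - hyperbolic(d, k, m)) ** 2
                      for d, p in zip(delays, proportions))
            if best is None or sse < best[0]:
                best = (sse, k, m)
    return best[1], best[2]
```

On synthetic data generated with k = 0.13 (the group average reported below for the ascending condition) and M = 1.0, the search recovers the generating parameters.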
Results
Pretraining
During pretraining, all subjects acquired a preference for the larger-amount alternative and maintained it when its position was reversed; the choice proportion at the end of this phase was .97 ± .01 (mean ± SEM), and at the end of the reversal it was .96 ± .01. When the shock was introduced as an additional consequence of choosing the larger-amount alternative, all subjects decreased their preference for it. Table 1 shows, for each individual, the proportion of choice (mean ± SEM) for the larger-amount alternative when a shock was not presented (Pretraining 1) and when a shock was presented (Pretraining 2). Also shown are the shock intensities at the end of the pretraining phase and at the end of both temporal discounting conditions.
Temporal discounting
Choice proportions in ascending and descending delays.
Figure 2 shows the group-average and individual proportions of choice for the smaller-amount alternative as a function of the delay to the shock associated with the larger-amount alternative, during the last 3 days of the ascending delays (filled circles) and descending delays (unfilled circles) conditions. These data were obtained when each subject met the stability criteria, which occurred after 46.8 ± 8.09 sessions under the same shock intensity in the ascending delays condition, and after 47.9 ± 4.66 sessions in the descending delays condition. In both conditions, all subjects decreased their preference for the smaller-amount no-shock alternative (i.e., increased their preference for the larger-amount + shock alternative) when the aversive stimulus associated with the other alternative was delayed. An ANOVA with delay (0, 5, 10, 20, and 40 s) and condition (ascending, descending) as within-subjects factors indicated that delay exerted a significant effect, F(4, 36) = 88.71, p < .001, but that the effect of condition and its interaction with delay were not significant, F(1, 9) = 3.00, p = .12, and F(4, 36) = 1.84, p = .14, respectively. Post hoc analysis indicated that there were differences among all delays (all ps < .03), except between the 20- and 40-s delays (p = .74). Also shown in Fig. 2 are the best-fitting functions for each individual during each condition. Table 2 shows the parameter k and the variance accounted for, for each individual during each condition, as well as the group averages. Dependent-samples t tests showed that there were no differences between conditions in either the value of k, t(9) = -0.28, p = .78, or the variance accounted for, t(9) = 1.83, p = .10.
Latencies
Figure 3 shows the mean and SEM of the latencies for accepting each alternative in the forced-choice trials during the last five sessions of the ascending and descending delays conditions. In both conditions, the latency for accepting the larger-amount + shock alternative was a function of the delay to shock: As the delay increased, the latency diminished. The latency for accepting the smaller-amount no-shock alternative did not vary as a function of the delay to shock. An ANOVA was performed with condition, alternative, and delay as within-subjects factors. It demonstrated a significant effect of alternative, F(1, 8) = 9.16, p = .016, with latencies being higher for accepting the larger-amount + shock alternative than the smaller-amount no-shock alternative. The effect of delay was also significant, F(4, 32) = 10.36, p < .0001, indicating that the latency for accepting both alternatives decreased as the delay to shock increased. This effect was driven by the latencies for accepting the larger-amount + shock alternative, as indicated by a significant interaction of alternative with delay, F(4, 32) = 13.97, p < .0001. Post hoc analysis indicated that there were differences in the latency for accepting the larger-amount + shock alternative between the shortest delay (0 s) and the longer delays (10, 20, and 40 s), while the acceptance latencies for the smaller-amount no-shock alternative remained constant. The main effect of condition, as well as all of its interactions, was nonsignificant (all Fs < 0.19, all ps > .75).
Omissions
Figure 4 shows the mean and SEM of the percentage of omitted forced-choice trials for both alternatives during the last five sessions of the ascending and descending delays conditions. When the delay to shock was 0, 5, and 10 s, subjects omitted a portion of the forced-choice trials of the larger-amount + shock alternative. In contrast, when the delay to shock was 20 and 40 s, the percentage of omitted trials was zero. For the smaller-amount no-shock alternative, the percentage of omitted trials remained very low at all delays to shock. Because there was no variance at the long delays for the larger-amount + shock alternative, nor at any delay for the smaller-amount no-shock alternative, an ANOVA explored the effects of condition and delay on the percentage of omitted trials only for the 0-, 5-, and 10-s delays, and only for the larger-amount + shock alternative. The analysis demonstrated a significant effect of delay, F(2, 18) = 12.28, p < .001. Post hoc analysis indicated that the percentage of omitted trials differed significantly between the 0-s and 10-s delays and between the 5-s and 10-s delays (p < .01 and p < .05, respectively). The effect of condition, F(1, 9) = 1.67, p = .23, and its interaction with delay, F(2, 18) = 1.14, p = .34, were nonsignificant.
Discussion
The purpose of the present experiment was to describe the change in value of an aversive consequence as a function of the delay to its delivery; these data are important to complement research on temporal discounting, which has been performed mainly with positive reinforcers. The main findings were that the impact of the shock on preference decreased as a function of its delay and that subjects’ preferences were well described by a hyperbolic model of delay discounting, suggesting that each individual was sensitive to the delay of shock presentation; other measures related to preference (latency and omissions) were concordant with this interpretation. Although it is well known that aversive consequences lose their impact on behavior when they are delayed (Baron, 1965; Camp, Raymond, & Church, 1967; Deluty, 1978), this is the first study with rats, and the second with nonhuman animals, that has directly quantified this loss of value and presented a temporal discounting function for an aversive consequence.
The first phase of the present study involved presenting a choice between two alternatives with different amounts of reinforcement, both obtainable immediately; when preference for the larger-amount alternative had developed, a shock was added to it to test its aversive properties. We found a clear preference for the larger-amount alternative, which decreased when the shock was added. This decrement is consistent with several studies in which the addition of an aversive consequence has diminished the value of an otherwise preferred alternative (Azrin, 1960; Baron, 1965; Camp et al., 1967; Deluty, 1976), and it validates the use of the shock at the employed intensities as an aversive stimulus. To ensure that the shock was aversive, it was necessary to adjust its intensity for each subject; this adjustment has frequently been performed in studies involving aversive consequences (Azrin, 1960; Cohen, 1968; Deluty, 1976; Woolverton et al., 2012). The range of intensities employed in the final phases of the experiment was 0.35–1 mA, which is within the range commonly employed in studies of aversive stimulation (Bouzas, 1978; Camp et al., 1967; Cohen, 1968; Simon, Gilbert, Mayse, Bizon, & Setlow, 2009; see Note 1).
When the aversive stimulus was delayed, all subjects showed sensitivity to the delay, recovering the preference for the larger-amount + shock alternative as a function of the delay. In order to present the data in the form of a discounting function, we plotted the proportion of choice for the smaller-amount no-shock alternative, which is the complement of the preference for the larger-amount + shock alternative. The variance accounted for by the hyperbolic model of delay discounting was high, and similar to that found in studies that have assessed the delay discounting of food (Green et al., 2004; Richards, Mitchell, Wit, & Seiden, 1997). To make our results with aversive consequences more comparable to those obtained with positive consequences, a common hyperbolic model, instead of the more complex hyperboloid model, was fitted to the data. The hyperbolic model has been shown to be the most parsimonious in research with pigeons and rats (Aparicio, 2015; Green et al., 2004; Vanderveldt et al., 2016), contrary to research with human participants, in which the hyperboloid model has been shown to account for more variance (Myerson & Green, 1995). Further research could explore whether explicitly manipulating variables related to the parameters of more complex models of temporal discounting provides a better representation of the data obtained with aversive stimuli. Convergent evidence about the impact of the delay to the shock was obtained from the analysis of latencies and omissions during the forced-choice trials. Both the latency and the probability of acceptance of the larger-amount + shock alternative were sensitive to the delay of shock presentation: This alternative was accepted faster and with a higher probability as the shock became more delayed. Although these measures were taken from different trials than the proportion of choice (forced-choice vs. free-choice trials), the three measures point in the same direction: As the delay to shock was increased, the shock’s effect on preference weakened. These effects could have been facilitated by an enhanced discriminability of each delay from the beginning of each block, made possible by the association of each delay with a different stimulus. This procedural aspect was not present in the original Evenden and Ryan (1996) procedure, but it has proven effective in other procedures in which rapid changes in delay, amount, or frequency of reinforcement occur within a single session (Hanna, Blackman, & Todorov, 1992; Krägeloh & Davison, 2003; Pope, Newland, & Hutsell, 2015; Slezak & Anderson, 2009).
As is frequently found in studies on aversive stimulation (Chen & Amsel, 1982; Church, LoLordo, Overmier, Solomon, & Turner, 1966), in this study some subjects habituated to the shock. An advantage of the present procedure is that the effects of habituation are easy to observe in the proportion of choice for the larger-amount + shock alternative in the block in which the shock is delivered immediately. Our strategy for dealing with this issue was to consider that a subject had habituated when its proportion of choice for the larger-amount + shock alternative was higher than .5 during three consecutive days; when this happened, the shock intensity was increased by .1 mA. Besides long-term habituation, we were also concerned about possible short-term habituation. Given that subjects received shocks during the session, it was possible that, as the session progressed, sensitivity to the shock progressively diminished. If this were true, subjects’ preference for the larger-amount + shock alternative should increase as the session progressed, which was precisely our main result in the ascending-delays condition. In order to distinguish between temporal discounting and short-term habituation as the relevant variable for this pattern of results, in a subsequent condition we presented the delays in descending order and found that subjects were equally sensitive to the delay, a result that ruled out short-term habituation as an explanation for the discounting curve obtained during the first experimental condition. This was statistically confirmed by the nonsignificant effect of the condition factor (ascending vs. descending delays) on all of the variables analyzed.
Research on the comparison between the temporal discounting of positive and negative consequences has found interesting differences between the two processes; for example, it has been found that positive reinforcers are discounted more steeply than aversive consequences (Estle et al., 2006; Gonçalves & Silva, 2015). However, the generality of these conclusions is limited because only humans have participated in this research and because the consequences employed have been mainly hypothetical. For these reasons, we considered it important to perform research on temporal discounting of real aversive consequences in order to provide data that allow the above-mentioned comparison. Although the present study did not compare delay discounting functions for positive and negative consequences, the estimated values of k for the group average in ascending (k = 0.13) and descending (k = 0.19) delays were very similar to values reported in other studies with food (Mazur, 2007; Mazur & Biondi, 2009) and water (Richards et al., 1997) reinforcers. These findings suggest that delay discounting functions for positive and negative consequences are similar in rats.
This result has also been found in the study of probabilistic discounting, a research topic closely related to temporal discounting (Green & Myerson, 2004; Rachlin, 2006). A recent research program, using a procedure highly similar to the one employed in the present study, has provided information about the probabilistic discounting of aversive consequences in animals and has compared it with the discounting of a positive reinforcer, finding no differences in the degree of discounting for food and for shock (Simon et al., 2009). In that study, the authors manipulated the probability of shock presentation with the goal of obtaining probabilistic discounting functions; subjects faced a choice between a smaller-amount no-shock alternative and a larger-amount + shock alternative, and the probability of shock increased during the session (0, .25, .5, .75, 1). At the beginning of the session, when shock presentation had a low probability, subjects preferred the larger-amount + shock alternative. However, as shock probability increased, the preference for this alternative diminished, providing a discounting curve that reflected sensitivity to the shock probability. This procedure has been used to study different variables that potentially affect probability discounting: for example, the impact of motivational variables such as the amount of positive reinforcement and the shock intensity (Shimp, Mitchell, Beas, Bizon, & Setlow, 2015), the administration of dopaminergic agonists, antagonists, and other drugs (Mitchell et al., 2012; Mitchell, Vokes, Blankenship, Simon, & Setlow, 2011; Simon et al., 2009; Simon et al., 2011), lesions of the basolateral amygdala and orbitofrontal cortex (Orsini, Trotta, Bizon, & Setlow, 2015), and differences between sexes (Orsini, Willis, Gilbert, Bizon, & Setlow, 2016). The present experiment represents a first step toward performing this kind of research in the temporal discounting area.
In summary, the data obtained in both the ascending delay and the descending delay conditions allowed the quantification of the effects of delay of an aversive consequence on its acceptability. The main contributions of the present study are (1) the demonstration that the hyperbolic model of delay discounting fitted the data from experiments using an aversive consequence and (2) the proposal of a procedure that allows the study of the temporal discounting of aversive consequences, a topic that is relevant to different research problems in both the experimental analysis of behavior and the behavioral neurosciences. Besides contributing to the understanding of the environmental variables that determine choice when aversive consequences are involved, the present procedure could be useful for research on the neural and pharmacological bases of temporal discounting of aversive consequences. The procedure developed by Evenden and Ryan (1996) has been extremely useful for advancing the comprehension of the variables that determine the temporal discounting of rewards (for reviews, see Cardinal, 2006; Vanderveldt et al., 2016), and the modification of this procedure proposed here could have a similar influence on the understanding of the variables that control the temporal discounting of aversive consequences.
Notes
Regarding the ethical aspect of the aversive stimulus employed in the present study, it is important to mention that although its intensity was sufficient to diminish the preference for the larger-amount alternative, it was not so strong as to cause subjects to stop responding altogether; given the characteristics of the procedure, it was possible to avoid all shock presentations by omitting all forced-choice trials associated with the larger-amount + shock alternative and by choosing the smaller-amount no-shock alternative in the free-choice trials.
References
Alessi, S., & Petry, N. (2003). Pathological gambling severity is associated with impulsivity in a delay discounting procedure. Behavioural Processes, 64(3), 345–354.
Aparicio, C. F. (2015). Comparing models of intertemporal choice: Fitting data from Lewis and Fischer 344 rats. Conductual, 3(2). http://conductual.com/content/comparing-models-intertemporal-choice-fitting-data-lewis-and-fischer-344-rats.
Azrin, N. H. (1960). Effects of punishment intensity during variable‐interval reinforcement. Journal of the Experimental Analysis of Behavior, 3(2), 123–142.
Azrin, N. H., & Holz, W. C. (1966). Punishment. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 380–447). New York, NY: Appleton-Century-Crofts.
Banks, R., & Vogel-Sprott, M. (1965). Effect of delayed punishment on an immediately rewarded response in humans. Journal of Experimental Psychology, 70(4), 357.
Baron, A. (1965). Delayed punishment of a runway response. Journal of Comparative and Physiological Psychology, 60(1), 131.
Bouzas, A. (1978). The relative law of effect: Effects of shock intensity on response strength in multiple schedules. Journal of the Experimental Analysis of Behavior, 30(3), 307–314.
Camp, D. S., Raymond, G. A., & Church, R. M. (1967). Temporal relationship between response and punishment. Journal of Experimental Psychology, 74(1), 114.
Cardinal, R. N. (2006). Neural systems implicated in delayed and probabilistic reinforcement. Neural Networks, 19(8), 1277–1301.
Cardinal, R. N., Pennicott, D. R., Lakmali, C., Robbins, T. W., & Everitt, B. J. (2001). Impulsive choice induced in rats by lesions of the nucleus accumbens core. Science, 292(5526), 2499–2501.
Cardinal, R. N., Robbins, T. W., & Everitt, B. J. (2000). The effects of d-amphetamine, chlordiazepoxide, α-flupenthixol and behavioural manipulations on choice of signalled and unsignalled delayed reinforcement in rats. Psychopharmacology, 152, 362–375.
Chen, J.-S., & Amsel, A. (1982). Habituation to shock and learned persistence in preweanling, juvenile, and adult rats. Journal of Experimental Psychology: Animal Behavior Processes, 8(2), 113.
Church, R. M., LoLordo, V., Overmier, J. B., Solomon, R. L., & Turner, L. H. (1966). Cardiac responses to shock in curarized dogs: Effects of shock intensity and duration, warning signal, and prior experience with shock. Journal of Comparative and Physiological Psychology, 62(1), 1.
Cohen, P. S. (1968). Punishment: The interactive effects of delay and intensity of shock. Journal of the Experimental Analysis of Behavior, 11(6), 789–799.
Critchfield, T. S., & Kollins, S. H. (2001). Temporal discounting: Basic research and the analysis of socially important behavior. Journal of Applied Behavior Analysis, 34(1), 101–122.
De Villiers, P. (1977). Choice in concurrent schedules and a quantitative formulation of the law of effect. In J. E. R. Staddon & W. K. Honig (Eds.), Handbook of operant behavior (pp. 233–287). New York, NY: Prentice-Hall.
Deluty, M. (1976). Choice and the rate of punishment in concurrent schedules. Journal of the Experimental Analysis of Behavior, 25(1), 75–80.
Deluty, M. (1978). Self-control and impulsiveness involving aversive events. Journal of Experimental Psychology: Animal Behavior Processes, 4(3), 250.
Deluty, M., Whitehouse, W. G., Mellitz, M., & Hineline, P. N. (1983). Self-control and commitment involving aversive events. Behaviour Analysis Letters, 3(4), 213–219.
Estle, S. J., Green, L., Myerson, J., & Holt, D. D. (2006). Differential effects of amount on temporal and probability discounting of gains and losses. Memory & Cognition, 34(4), 914–928.
Evenden, J., & Ryan, C. (1996). The pharmacology of impulsive behaviour in rats: The effects of drugs on response choice with varying delays of reinforcement. Psychopharmacology, 128(2), 161–170.
Farley, J., & Fantino, E. (1978). The symmetrical law of effect and the matching relation in choice behavior. Journal of the Experimental Analysis of Behavior, 29(1), 37–60.
Freeman, K. B., Green, L., Myerson, J., & Woolverton, W. L. (2009). Delay discounting of saccharin in rhesus monkeys. Behavioural Processes, 82(2), 214–218.
Freeman, K. B., Nonnemacher, J. E., Green, L., Myerson, J., & Woolverton, W. L. (2012). Delay discounting in rhesus monkeys: Equivalent discounting of more and less preferred sucrose concentrations. Learning & Behavior, 40(1), 54–60.
Gonçalves, F. L., & Silva, M. T. A. (2015). Comparing individual delay discounting of gains and losses. Psychology & Neuroscience, 8(2), 267.
Green, L., & Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychological Bulletin, 130(5), 769.
Green, L., Myerson, J., Holt, D. D., Slevin, J. R., & Estle, S. J. (2004). Discounting of delayed food rewards in pigeons and rats: Is there a magnitude effect? Journal of the Experimental Analysis of Behavior, 81(1), 39–50.
Hanna, E. S., Blackman, D. E., & Todorov, J. C. (1992). Stimulus effects on concurrent performance in transition. Journal of the Experimental Analysis of Behavior, 58(2), 335–347.
Jimura, K., Myerson, J., Hilgard, J., Keighley, J., Braver, T. S., & Green, L. (2011). Domain independence and stability in young and older adults’ discounting of delayed rewards. Behavioural Processes, 87(3), 253–259.
Krägeloh, C. U., & Davison, M. (2003). Concurrent‐schedule performance in transition: Changeover delays and signaled reinforcer ratios. Journal of the Experimental Analysis of Behavior, 79(1), 87–109.
MacKillop, J., Amlung, M. T., Few, L. R., Ray, L. A., Sweet, L. H., & Munafò, M. R. (2011). Delayed reward discounting and addictive behavior: A meta-analysis. Psychopharmacology, 216(3), 305–321.
Madden, G. J., Begotka, A. M., Raiff, B. R., & Kastern, L. L. (2003). Delay discounting of real and hypothetical rewards. Experimental and Clinical Psychopharmacology, 11(2), 139.
Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, & J. A. Nevin (Eds.), Quantitative analyses of behavior: Vol. 5. The effect of delay and of intervening events on reinforcement value (pp. 55–73). Hillsdale, NJ: Erlbaum.
Mazur, J. E. (2007). Rats’ choices between one and two delayed reinforcers. Learning & Behavior, 35(3), 169–176.
Mazur, J. E., & Biondi, D. R. (2009). Delay‐amount tradeoffs in choices by pigeons and rats: Hyperbolic versus exponential discounting. Journal of the Experimental Analysis of Behavior, 91(2), 197–211.
Mitchell, M. R., Mendez, I. A., Vokes, C. M., Damborsky, J. C., Winzer-Serhan, U. H., & Setlow, B. (2012). Effects of developmental nicotine exposure in rats on decision making in adulthood. Behavioural Pharmacology, 23(1), 34.
Mitchell, M. R., Vokes, C. M., Blankenship, A. L., Simon, N. W., & Setlow, B. (2011). Effects of acute administration of nicotine, amphetamine, diazepam, morphine, and ethanol on risky decision-making in rats. Psychopharmacology, 218(4), 703–712.
Myerson, J., & Green, L. (1995). Discounting of delayed rewards: Models of individual choice. Journal of the Experimental Analysis of Behavior, 64(3), 263–276.
Oberlin, B. G., & Grahame, N. J. (2009). High‐alcohol preferring mice are more impulsive than low‐alcohol preferring mice as measured in the delay discounting task. Alcoholism: Clinical and Experimental Research, 33(7), 1294–1303.
Odum, A. L., Madden, G. J., & Bickel, W. K. (2002). Discounting of delayed health gains and losses by current, never-and ex-smokers of cigarettes. Nicotine & Tobacco Research, 4(3), 295–303.
Odum, A. L., & Rainaud, C. P. (2003). Discounting of delayed hypothetical money, alcohol, and food. Behavioural Processes, 64(3), 305–313.
Orsini, C. A., Mitchell, M. R., Heshmati, S. C., Shimp, K. G., Spurrell, M., Bizon, J. L., & Setlow, B. (2017). Effects of nucleus accumbens amphetamine administration on performance in a delay discounting task. Behavioural Brain Research, 321, 130–136.
Orsini, C. A., Trotta, R. T., Bizon, J. L., & Setlow, B. (2015). Dissociable roles for the basolateral amygdala and orbitofrontal cortex in decision-making under risk of punishment. Journal of Neuroscience, 35(4), 1368–1379.
Orsini, C. A., Willis, M. L., Gilbert, R. J., Bizon, J. L., & Setlow, B. (2016). Sex differences in a rat model of risky decision making. Behavioral Neuroscience, 130(1), 50.
Ostaszewski, P., & Karzel, K. (2002). Discounting of delayed and probabilistic losses of different amounts. European Psychologist, 7(4), 295.
Pinkston, J. W., & Lamb, R. (2011). Delay discounting in C57BL/6J and DBA/2J mice: Adolescent-limited and life-persistent patterns of impulsivity. Behavioral Neuroscience, 125(2), 194.
Pope, D. A., Newland, M. C., & Hutsell, B. A. (2015). Delay‐specific stimuli and genotype interact to determine temporal discounting in a rapid‐acquisition procedure. Journal of the Experimental Analysis of Behavior, 103(3), 450–471.
Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior, 85(3), 425–435.
Rachlin, H., & Herrnstein, R. J. (1969). Hedonism revisited: On the negative law of effect. In B. A. Campbell & R. M. Church (Eds.), Punishment and aversive behavior (pp. 83–109). New York, NY: Appleton-Century-Crofts.
Reynolds, B. (2006). A review of delay-discounting research with humans: Relations to drug use and Gambling. Behavioural Pharmacology, 17(8), 651–667.
Reynolds, B., De Wit, H., & Richards, J. B. (2002). Delay of gratification and delay discounting in rats. Behavioural Processes, 59(3), 157–168.
Richards, J. B., Mitchell, S. H., de Wit, H., & Seiden, L. S. (1997). Determination of discount functions in rats with an adjusting‐amount procedure. Journal of the Experimental Analysis of Behavior, 67(3), 353–366.
Shimp, K. G., Mitchell, M. R., Beas, B. S., Bizon, J. L., & Setlow, B. (2015). Affective and cognitive mechanisms of risky decision making. Neurobiology of Learning and Memory, 117, 60–70.
Simon, N. W., Gilbert, R. J., Mayse, J. D., Bizon, J. L., & Setlow, B. (2009). Balancing risk and reward: A rat model of risky decision making. Neuropsychopharmacology, 34(10), 2208–2217.
Simon, N. W., Montgomery, K. S., Beas, B. S., Mitchell, M. R., LaSarge, C. L., Mendez, I. A., . . . Haberman, R. P. (2011). Dopaminergic modulation of risky decision-making. Journal of Neuroscience, 31(48), 17460–17470.
Slezak, J. M., & Anderson, K. G. (2009). Effects of variable training, signaled and unsignaled delays, and d-amphetamine on delay-discounting functions. Behavioural Pharmacology, 20(5/6), 424–436.
Vanderveldt, A., Oliveira, L., & Green, L. (2016). Delay discounting: Pigeon, rat, human—Does it matter? Journal of Experimental Psychology: Animal Learning and Cognition, 42(2), 141.
Weller, R. E., Cook, E. W., Avsar, K. B., & Cox, J. E. (2008). Obese women show greater delay discounting than healthy-weight women. Appetite, 51(3), 563–569.
Wilson, V. B., Mitchell, S. H., Musser, E. D., Schmitt, C. F., & Nigg, J. T. (2011). Delay discounting of reward in ADHD: Application in young children. Journal of Child Psychology and Psychiatry, 52(3), 256–264.
Winstanley, C. A., Theobald, D. E., Cardinal, R. N., & Robbins, T. W. (2004). Contrasting roles of basolateral amygdala and orbitofrontal cortex in impulsive choice. Journal of Neuroscience, 24(20), 4718–4722.
Woolverton, W. L., Freeman, K. B., Myerson, J., & Green, L. (2012). Suppression of cocaine self-administration in monkeys: Effects of delayed punishment. Psychopharmacology, 220(3), 509–517.
Woolverton, W. L., Myerson, J., & Green, L. (2007). Delay discounting of cocaine by rhesus monkeys. Experimental and Clinical Psychopharmacology, 15(3), 238.
Acknowledgements
This research was supported by Grants 167016 from CONACYT and IN306415 from PAPIIT. We thank Fernando Salinas for technical assistance, Lourdes Valencia for useful comments on a previous version, and Adriana Rincon, Rodrigo Alba, Enrique Rivera, and Ithandehui Jaimes for assistance in data collection.
Rodríguez, W., Bouzas, A. & Orduña, V. Temporal discounting of aversive consequences in rats. Learn Behav 46, 38–48 (2018). https://doi.org/10.3758/s13420-017-0279-9