Research Article: New Research - Registered Report, Cognition and Behavior

Reactivation-Dependent Amnesia for Contextual Fear Memories: Evidence for Publication Bias

Natalie Schroyens, Eric L. Sigwald, Wim Van Den Noortgate, Tom Beckers and Laura Luyten
eNeuro 18 December 2020, 8 (1) ENEURO.0108-20.2020; DOI: https://doi.org/10.1523/ENEURO.0108-20.2020
Natalie Schroyens
1Centre for the Psychology of Learning and Experimental Psychopathology, Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
2Leuven Brain Institute, Leuven 3000, Belgium
Eric L. Sigwald
1Centre for the Psychology of Learning and Experimental Psychopathology, Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
3Laboratorio de Neuropatología Experimental, Instituto de Investigación Médica Mercedes y Martín Ferreyra-Consejo Nacional de Investigaciones Científicas y Técnicas-Universidad Nacional de Córdoba, Córdoba 5016, Argentina
Wim Van Den Noortgate
4Faculty of Psychology and Educational Sciences and ITEC (interdisciplinary research group of imec and Katholieke Universiteit Leuven), Katholieke Universiteit Leuven, Kortrijk 8500, Belgium
5Methodology of Educational Research, Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
Tom Beckers
1Centre for the Psychology of Learning and Experimental Psychopathology, Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
2Leuven Brain Institute, Leuven 3000, Belgium
Laura Luyten
1Centre for the Psychology of Learning and Experimental Psychopathology, Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
2Leuven Brain Institute, Leuven 3000, Belgium

Abstract

Research on memory reconsolidation has been booming in the last two decades, with numerous high-impact publications reporting promising amnestic interventions in rodents and humans. However, our own recently-published failed replication attempts of reactivation-dependent amnesia for fear memories in rats suggest that such amnestic effects are not always readily found and that they depend on subtle and possibly uncontrollable parameters. The discrepancy between our observations and published studies in rodents suggests that the literature in this field might be biased. The aim of the current study was to gauge the presence of publication bias in a well-delineated part of the reconsolidation literature. To this end, we performed a systematic review of the literature on reactivation-dependent amnesia for contextual fear memories in rodents, followed by a statistical assessment of publication bias in this sample. In addition, relevant researchers were contacted for unpublished results, which were included in the current analyses. The obtained results support the presence of publication bias, suggesting that the literature provides an overly optimistic overall estimate of the size and reproducibility of amnestic effects. Reactivation-dependent amnesia for contextual fear memories in rodents is thus less robust than what is projected by the literature. The moderate success of clinical studies may be in line with this conclusion, rather than reflecting translational issues. For the field to evolve, replication and non-biased publication of obtained results are essential. A set of tools that can create opportunities to increase transparency, reproducibility and credibility of research findings is provided.

  • amnesia
  • contextual fear memory
  • pharmacology
  • publication bias
  • reconsolidation
  • rodents

Significance Statement

The present study suggests that the literature on drug-induced, reactivation-dependent amnesia for contextual fear memories in rodents is biased. Such bias is problematic because it can misinform researchers when making decisions on how to optimally invest their resources. The lack of robustness of amnestic effects, in combination with the strict (but vague) conditions that are required for memory destabilization and the absence of clear explanations for some of the observed null effects, casts doubt on the potential of the proposed clinical application of postreactivation interventions. Finally, current mechanistic theories that are commonly used to explain reactivation-dependent amnesia, such as reconsolidation or state dependency, cannot predict when amnestic effects will occur.

Introduction

Amnesia for previously acquired memories can be obtained by applying certain treatments shortly before or after memory reactivation. After the first published observation of reactivation-dependent amnesia, which was obtained by giving rats an electroconvulsive shock after a brief, unreinforced re-exposure to a conditioned tone (Misanin et al., 1968), this procedure was conceptually replicated using a wide variety of experimental protocols and treatments in different species (Reichelt and Lee, 2013; Beckers and Kindt, 2017). Overall, research on reactivation-dependent amnesia, commonly referred to as “reconsolidation blockade,” has accelerated during the last two decades, with high-impact publications reporting promising amnestic interventions in rodents and humans (Nader et al., 2000; Kindt et al., 2009).

Meanwhile, studies revealed that memory reactivation does not occur each time a memory is retrieved but depends on the conditions under which the memory was acquired and retrieved (e.g., memory strength, age, type, or the amount of novelty introduced during the reactivation session; Tronson and Taylor, 2007). Apart from those controlled studies that examined limiting factors on memory destabilization, there have been only a few papers reporting failures to obtain reactivation-dependent amnesia under standard conditions. Published failures mainly involved pharmacologically-induced amnesia in human participants (Bos et al., 2014; Thome et al., 2016; Schroyens et al., 2017) or the retrieval-extinction effect (Soeter and Kindt, 2011; Luyten and Beckers, 2017; Chalkia et al., 2020a), whereas the literature on pharmacologically-induced amnesia for fear memories in rodents shows robust and consistent amnestic effects, with hardly any failures to replicate.

In our lab, we aimed to further investigate opportunities for the clinical application of reactivation-dependent amnesia for fear memories. To this end, we set out to replicate published studies in which systemic drug injection after unreinforced re-exposure to a conditioned stimulus (CS) in rats resulted in amnesia for contextual or cued fear memories (Schroyens et al., 2019a,b; Luyten et al., 2020). In contrast to what is reported in the literature, our extensive series of conceptual and exact replication attempts, performed by several experimenters and in different laboratories, did not provide clear evidence for reactivation-dependent amnesia. In fact, we could only reproduce the amnestic effect when the experimenter from the original study (now a member of our research group) conducted the study in the original lab, with animals purchased from the local supplier [see Table 1 for an overview of our exact replication attempts using contextual fear conditioning and midazolam (MDZ)]. Given that we tested a wide range of behavioral parameters and (sometimes exactly) adhered to the standard protocols that have been typically used in the literature, it is unlikely that our negative results can be explained by previously established limiting factors on memory destabilization.

Table 1

Overview of our exact replication attempts using contextual fear conditioning and postreactivation systemic MDZ injection in male Wistar rats

Overall, it can be concluded that the experimental evidence obtained in our replication attempts is not in line with the general representation of amnesia by postreactivation systemic drug injection in the literature. Our five failed exact replication attempts using contextual fear and MDZ (Table 1) suggest that the outcome of the procedure depends on delicate, unknown, and possibly uncontrollable parameters. Therefore, it seems unlikely that the high rate of large amnestic effects that is portrayed by the current literature is a reliable representation of actual observations. Based on the discrepancy between our results and those from published studies, we suspect that (1) amnestic effects are less easily replicated than what is currently suggested by the literature and thus (2) the large effect sizes that are reported in the literature are merely a subset of the range of effect sizes that have actually been observed. We hypothesize that these issues arise from the omission of negative findings in the published literature (i.e., reporting and publication bias).

The main aim of the current paper was to assess whether indeed the literature on pharmacologically-induced reactivation-dependent amnesia for contextual fear memories in rodents shows evidence of publication bias. The first part of this project, which we completed before preregistration, consisted of an exploratory assessment of publication bias in the sample of published studies that used postreactivation systemic injection of MDZ. Given that our ultimate aim was to investigate whether publication bias applies to the field in a broader sense, rather than for just one (systemically injected) drug, we performed a preregistered systematic review of the literature on pharmacologically-induced reactivation-dependent amnesia for contextual fear memories in rodents. Publication bias in this larger sample was assessed statistically, and relevant researchers were contacted to enquire about and request unpublished datasets. The obtained results contribute to a clearer view on the robustness of reactivation-dependent amnesia for contextual fear memories in rodents.

Materials and Methods

Relevant datasets, R scripts and overview tables of all included studies can be found on the Open Science Framework (OSF) at https://osf.io/apu9t/ (DOI 10.17605/OSF.IO/APU9T).

Systematic literature review

We performed a literature search through the online database of PubMed using the Boolean search terms ‘(context OR contextual) AND (fear OR aversive OR threat) AND (memory OR learning) AND (reconsolid* OR reactivat* OR destabili*)’ to look for relevant published papers concerning drug-induced reactivation-dependent amnesia for contextual fear memories in rodents. After obtaining in-principle acceptance for the present study, the systematic review was registered at PROSPERO, in which we further specified that “pharmacological manipulations” do not entail genetic manipulations and, given that we investigate “reactivation-dependent amnesia,” we only consider treatments that were aimed at inducing amnesia (these criteria were implied, but not explicitly mentioned, in the Stage 1 Registered Report).

Inclusion criteria

Experiments were included when meeting all of the following criteria (related to each element of the PICO framework):

(1) Population. Rats or mice of either sex were used.

(2) Intervention. Contextual fear conditioning was used [i.e., one or multiple unsignaled shock(s) administered in the training context] and, afterward, a pharmacological manipulation was applied once, before or after a brief unreinforced re-exposure to the training context that is commonly referred to as “contextual fear memory reactivation.” Experiments were included regardless of the mode of drug administration.

(3) Control group. A negative control group was included, in which subjects received a memory reactivation session combined with vehicle administration, or in which the drug of interest was administered without receiving a memory reactivation session. If multiple negative control groups were used, the most commonly used control was considered, which appeared to be the vehicle control. Experiments that did not include such a control group, but, for example, only a positive control condition (i.e., in which the treatment of interest is compared with a “gold standard” treatment), were not included in the meta-analyses because of the lack of an appropriate control.

(4) Outcomes. A behavioral measure of fear or anxiety (e.g., freezing) was recorded during drug-free testing of long-term memory retention (at least one day after reactivation). If multiple tests were performed, only the results of the first drug-free long-term retention test were included.

(5) Studies for which we were unable to calculate the effect size from reported graphs or statistics are addressed in the paper but not included in the meta-analyses.

We excluded from the meta-analysis those “boundary” conditions in which amnesia is not expected to occur based on theoretical considerations and prior empirical observations concerning reactivation-dependent amnesia. As mentioned in the introduction, it is established that reactivation-dependent amnesia occurs only under certain theoretically-grounded circumstances. For example, it has been found that the success of obtaining reactivation-dependent amnesia depends on memory-related characteristics (such as its age or strength), the use of (stressful) interventions before learning, the conditions under which memory is retrieved (e.g., properties and duration of the reactivation session), the timing of drug application (e.g., not too long before/after the reactivation session), and the time of retention testing (e.g., amnesia is not expected to be observed immediately after the intervention). Importantly, we did not aim to investigate the presence of null findings obtained under those boundary conditions. Rather, we wanted to assess whether negative results have been obtained (and possibly suppressed) in situations where amnesia was expected to occur (i.e., under standard conditions). However, given that these limiting conditions are not absolute (i.e., they can be overcome and seem to interact with each other), it is impossible to predefine a comprehensive set of exact conditions in which amnesia is (not) expected to occur. Therefore, the experimental parameters of all experiments that fulfill the criteria stated above (see ‘Inclusion criteria’) were summarized in an overview table and reviewed independently by two other researchers to select relevant studies to be included in the meta-analyses. Both researchers have experience with the topic, were blinded to study outcome, and judged inclusion based on the guiding principles listed below. Given the widely accepted boundary conditions for fear memory destabilization, memories should be recent (<7 d) at the time of reactivation, and the reactivation session should take less than two times the duration of the training session. For studies that explicitly aimed to investigate conditions that were expected to impede reactivation-dependent amnesia (such as stress manipulations before learning), only the “positive control” condition (in which the effect was expected to occur) was included, whereas the conditions under investigation were excluded (regardless of their outcome, given that the selecting researchers were blinded). Negative control conditions that are commonly used in the investigation of amnesia, such as delayed treatment application (>1 h after termination of the reactivation session) or short-term memory tests, were excluded from the meta-analysis. Nevertheless, as mentioned earlier, any conditions that met the first four inclusion criteria listed in the previous paragraph were included in an overview table (https://osf.io/x2pkq/). This way, a thorough overview of all adopted experimental parameters and boundary conditions is provided.

Articles were selected based on the abstract, and the methods section was screened as well if the abstract provided no information on the inclusion of a contextual fear memory procedure and/or insufficient information regarding drug application. Afterwards, the full text of the selected articles was screened to further assess eligibility (see https://osf.io/qebtd/ for a detailed overview of the review process). The summary table providing detailed information on experimental parameters for all included studies can be found at https://osf.io/sjwbd/. Based on this information, two blinded researchers further selected conditions for inclusion in the meta-analysis. For the purpose of restricting the number of papers to be included in the meta-analysis and the number of researchers to be contacted (see ‘Acquiring unpublished data’ below), we limited the scope of the meta-analysis to the most commonly used drugs for inducing reactivation-dependent amnesia for contextual fear memories. Therefore, only studies that met all above-mentioned inclusion criteria (see ‘Inclusion criteria’) and used drugs that appeared in five or more research articles [i.e., anisomycin (ANI), MDZ, MK-801, and propranolol (PROP)] were included in the meta-analyses.

Calculation of Hedges’ g

Means and SDs were estimated from reported descriptive and test statistics or from reported graphs using WebPlotDigitizer (Rohatgi, 2019). If only the overall sample size of a study was provided and the group sizes could not be derived, we assumed that subjects were equally divided among the groups. Hedges’ g (with correction for small-sample bias) and the corresponding SE were calculated from the estimated means, SDs, and group sizes using the metafor package in R. The Stage 1 version of the Registered Report mentioned “Cohen’s d” instead of “Hedges’ g.” However, we decided to use Hedges’ g (which is the default output of the adopted escalc function of the metafor package when calculating standardized mean differences), because this measure corrects for small-sample bias. In any case, we compared both effect size measures for all included studies and found highly similar estimates.
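
To illustrate this step, a minimal sketch in R is given below (this is not the authors' actual script; the means, SDs, and group sizes are illustrative placeholders, not data from any included study). The escalc function of the metafor package returns the bias-corrected standardized mean difference (Hedges' g) and its sampling variance.

# Minimal sketch: Hedges' g for one hypothetical vehicle-drug comparison.
library(metafor)

# Vehicle group is entered first and drug group second, so that an amnestic
# effect (lower freezing after drug) yields a positive Hedges' g.
dat <- data.frame(m1i = 55, sd1i = 18, n1i = 8,   # vehicle: mean % freezing, SD, n
                  m2i = 25, sd2i = 15, n2i = 8)   # drug group

# measure = "SMD" gives the standardized mean difference with the small-sample
# (Hedges) correction; yi = effect size, vi = sampling variance
es <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
             m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
es$sei <- sqrt(es$vi)  # SE of the effect size, used later in the funnel plots
es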

Meta-analysis

We used the metafor package in R to fit meta-analytic random-effects models, using restricted maximum likelihood (Viechtbauer, 2010). Measures of between-study variation include τ2, I2, and Cochran’s Q test. Research group and amnestic drug were included as moderators in case of significant between-study heterogeneity. Importantly, rather than estimating the size of the amnestic effect or investigating moderators, our goal was to assess whether the overall sample of published studies is subject to publication bias.
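
A minimal sketch of these models follows, assuming a data frame dat with columns yi (Hedges' g), vi (sampling variance), group (research group), and drug; these column names are assumptions for illustration, not the authors' actual variable names.

# Random-effects meta-analysis fitted with REML, as implemented in metafor
library(metafor)

res <- rma(yi, vi, data = dat, method = "REML")
res       # reports tau^2, I^2, and Cochran's Q test for heterogeneity

# In case of significant heterogeneity, refit with research group and amnestic
# drug as moderators; QM tests the moderators, QE tests residual heterogeneity
res_mod <- rma(yi, vi, mods = ~ factor(group) + factor(drug),
               data = dat, method = "REML")
res_mod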

Publication bias

The funnel plot, in which the effect estimate for each study (here, the standardized mean difference) is plotted against a measure of precision of that study (here, the SE of the standardized mean difference as suggested by Sterne and Egger, 2001), is a primary visual tool to assess publication and other biases (Sterne et al., 2005; Peters et al., 2008). Observed effect sizes such as standardized mean differences are unbiased estimates of the population effect size regardless of the sample size, but the effect sizes obtained by studies with relatively low precision are in general more variable than those from studies with higher precision. As a result, in the absence of bias, those low-precision studies (i.e., lying at the bottom of the plot) are expected to scatter more widely than high-precision studies (lying at the top of the plot), resulting in a symmetrical funnel shape of the dots in the plot. However, if small studies with non-significant results remain unreported or unpublished, we can expect a gap (located at the bottom left in case of a positive true effect size), and the funnel shape can thus become asymmetrical. Egger’s linear regression approach was used to assess such plot asymmetry. We used a weighted regression of the effect estimates on their SEs, including a multiplicative dispersion parameter (Sterne and Egger, 2005).

Although funnel plots and Egger’s regression are standard tools for the assessment of publication bias, it should be noted that publication bias is not the only possible cause of funnel plot asymmetry or of a relationship between study precision and effect size (Sterne et al., 2005). For example, between-study heterogeneity in itself may lead to funnel plot asymmetry because of an accidental correlation between precision and effect size or because of a confounding effect of study characteristics. Such heterogeneity can pose a challenge for funnel plot interpretation. Consider, for example, the research group in which the experiments were performed: certain environmental or methodological differences between research groups may lead to differences in both observed effect sizes and precision of studies (researchers who obtain large effects might move toward using smaller samples in their future studies). In order to account for such heterogeneity in observed effect sizes, research group was included as a moderator. The meta-analytic model without moderators was used for the creation of the funnel plots (which allowed for plotting of the raw effect sizes rather than their residual values), whereas the model with moderators was used for Egger’s linear regression (allowing us to statistically test for funnel plot asymmetry after accounting for the influence of the moderator(s) included in the meta-analytic model). For the sake of completeness, results of all regression models (with and without moderators) can be found at https://osf.io/zshwx/.
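
As a sketch under the same assumptions as above (with res and res_mod from the previous sketch), the funnel plot and Egger's test could be produced as follows; funnel() is applied to the model without moderators and regtest() with model = "lm" to the moderator model, mirroring the choices described in this paragraph.

# Funnel plot of the raw effect sizes against their SEs (model without moderators)
funnel(res, xlab = "Hedges' g")

# Egger's test: weighted regression of the effect estimates on their SEs with a
# multiplicative dispersion term, after accounting for the moderators in res_mod
regtest(res_mod, model = "lm", predictor = "sei")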

Apart from publication bias and genuine between-study heterogeneity, other sources of reporting bias (e.g., selective outcome or analysis reporting), suboptimal design and/or analyses used in smaller studies, and artefactual sampling variance may also lead to asymmetric funnel plots (Sterne et al., 2011). One way to discriminate publication bias as a source of asymmetry in funnel plots from other factors is by using contour-enhanced funnel plots (Peters et al., 2008). Contour-enhanced funnel plots, in which levels of statistical significance are displayed (i.e., <0.01, <0.05, and <0.1), were therefore used to visualize whether publication bias is a likely factor contributing to funnel plot asymmetry (Peters et al., 2008).
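
A contour-enhanced funnel plot can be drawn along the following lines (a sketch using metafor's funnel function and the model res from the earlier sketch; the shading levels correspond to the significance thresholds mentioned above).

# Contour-enhanced funnel plot: shaded contours mark the significance regions
# around a null effect (0.1, 0.05, 0.01); refline = 0 centers the contours at zero
funnel(res, level = c(90, 95, 99),
       shade = c("white", "gray55", "gray75"),
       refline = 0, legend = TRUE, xlab = "Hedges' g")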

Acquiring unpublished data

All corresponding authors from the selected articles and other relevant researchers were contacted via E-mail to enquire about and request unpublished datasets. In addition, announcements were spread using StudySwap (Chartier et al., 2018), conference mailing lists, and social media (Twitter, ResearchGate, etc.). Obtained unpublished datasets that met the inclusion criteria stated above (see ‘Inclusion criteria’) were included in funnel plots to get an indication of the precision, obtained effect sizes, and statistical significance of these unpublished results. In addition, Egger’s regression was repeated using the total sample that includes published as well as unpublished datasets.

Pilot data

Before preregistration of the current study, we completed some exploratory analyses. In the course of reporting some of our replication efforts (Schroyens et al., 2019a), we performed a thorough literature search for studies that, like in our experiments, had used contextual fear conditioning and postreactivation systemic injection of MDZ to induce amnesia for a previously acquired fear memory in adult rats. We found 15 published papers (until April 2019; see Table 2) and conducted a random-effects meta-analysis using this sample (adhering to the inclusion criteria and statistical analyses outlined in the methods section above). The same analyses were also performed on datasets from our own replication studies (Schroyens et al., 2019a,b), in which highly similar procedures and parameters were used.

Table 2

Experiments (until April 2019) from 15 different papers included in our pilot analyses investigating amnestic effects of postreactivation MDZ administration for contextual fear conditioning in adult rats under standard conditions (in chronological order based on publication date)

Each of the 15 published papers contained at least one study in which an amnestic effect was found. Some papers included conditions that aimed to test limiting factors on reactivation-dependent amnesia, i.e., (stressful) interventions before learning (Zhang and Cranney, 2008; Bustos et al., 2010; Ortiz et al., 2015; Espejo et al., 2016, 2017), the use of remote fear memories (Bustos et al., 2009), reactivation durations that yield inadequate levels of prediction error (Alfei et al., 2015), or drug injection outside the reconsolidation time window (Bustos et al., 2006; Stern et al., 2012). Based on the inclusion criteria described in the methods section, those conditions in which amnestic effects were not hypothesized to occur were not included in the present meta-analysis, given that we aimed to study the occurrence of reactivation-dependent amnesia under optimal standard conditions. Included studies in which no amnestic effects were found used either relatively brief (i.e., 1 or 1.5 min) or long (i.e., 10 min) reactivation sessions. A complete overview of experimental parameters adopted in each of the studies can be found on our OSF page at https://osf.io/sjwbd/. If multiple intervention groups were compared with the same control group, the intervention groups were combined into a single group as recommended by Higgins and Green (2011), as sketched below.
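
The following is a sketch of the group-combination step referenced above, based on the Cochrane Handbook formula for pooling two arms into one (Higgins and Green, 2011); the input values are illustrative placeholders, not data from any included study.

# Combining two intervention arms that share one control group into a single group
combine_groups <- function(n1, m1, sd1, n2, m2, sd2) {
  n <- n1 + n2
  m <- (n1 * m1 + n2 * m2) / n
  s <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2 +
             (n1 * n2 / (n1 + n2)) * (m1 - m2)^2) / (n1 + n2 - 1))
  c(n = n, mean = m, sd = s)
}

combine_groups(n1 = 8, m1 = 30, sd1 = 12, n2 = 8, m2 = 26, sd2 = 15)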

Published studies from other research groups versus our own replication attempts using MDZ: a first indication of publication bias

An extensive literature search for studies using contextual fear conditioning and systemic MDZ injection after memory reactivation revealed 15 papers, containing a total of 33 comparisons (postreactivation MDZ vs SAL) that fulfilled the standard conditions for memory destabilization (Table 2). Visual inspection of the funnel plot including these experiments suggests asymmetry (Fig. 1, left panel). The random-effects meta-analysis on this sample (k = 33, total N = 549) showed considerable between-study heterogeneity [Q(32) = 164.10; p < 0.001; τ2 = 1.76 (SE = 0.55) [0.98; 3.45]; I2 = 81.84% [71.45; 89.83]], implying differences between studies beyond those to be expected by chance. Given such heterogeneity, and as preregistered, research group was included in the model as a moderator. We found that the effect sizes plotted in Figure 1, left panel, depended on the research group in which the experiment was performed [i.e., research group was a statistically significant moderator of effect size; QM(5) = 82.35, p < 0.001]. Nevertheless, residual heterogeneity remained significant [QE(28) = 124.78; p < 0.001; τ2 = 1.53 (SE = 0.53) [0.78; 3.15]; I2 = 78.40% [64.92; 88.24]], suggesting that reported effect sizes differ significantly between research groups, but these between-group differences cannot fully explain all of the observed heterogeneity between studies. When statistically assessing the relationship between the effect estimates and their SEs (i.e., the relation represented in the funnel plots), we used the meta-analytic model with the moderator to test for funnel plot asymmetry after accounting for the influence of research group. Doing so, Egger’s test provided statistical evidence for funnel plot asymmetry (t(27) = 5.02; p < 0.001), which can be an indication of publication bias.

Figure 1.

Funnel plots including published studies (left panel) and our own replication studies (right panel) in which MDZ was used as amnestic agent. Each point represents an observed effect size (Hedges’ g) plotted against its SE. Visual inspection of the plot in the right panel shows that our replication studies are symmetrically scattered around the effect estimate of 0.04, indicating that the estimated effect size is close to zero and suggesting that no trend in one particular direction was observed across studies. In contrast, the plot of published studies (left panel) clearly shows asymmetry, and the reported effect sizes seem to depend strongly on the research group in which the studies were performed (represented by the different symbols in the left plot). Egger’s test confirmed plot asymmetry (p < 0.0001), even when considering the moderating influence of research group. One should be cautious in attaching value to the estimated effect size shown in the left funnel plot, given the evidence for publication bias and because the nesting of studies within research groups is not accounted for. The funnel plots were based on the meta-analytic models without moderators. Symbols represent the research group in which each study was performed (left panel) or the lab space that was used (right panel). Note that three of our exact replication studies (right panel) were performed in the same lab space as some of the original, published studies (left panel).

A similar random-effects meta-analysis on the replication studies from our group (k = 27, N = 324) showed different results (Fig. 1, right panel). No significant between-study heterogeneity was observed [Q(26) = 34.13; p = 0.132; τ2 = 0.08 (SE = 0.12) [0.00; 0.60]; I2 = 19.34% [0.00; 62.95]]. Visual inspection of this plot shows a different pattern than the one obtained for previously published studies, as our studies seem to be scattered more symmetrically. Nevertheless, Egger’s regression test indicated a significant negative relationship between the effect estimates and their SEs (t(25) = −2.46; p = 0.021), possibly because a few of our studies (i.e., those represented on the bottom left of the graph) used a small sample size of 4 rats/group, providing imprecise effect estimates.

The funnel plot and Egger’s test thus clearly reveal asymmetry in the published studies, which might indicate publication bias. One way to discriminate publication bias as a source of asymmetry in funnel plots from other factors is by using contour-enhanced funnel plots (Peters et al., 2008). The contour-enhanced funnel plot indicates that nearly all published studies report significant results (i.e., studies plotted to the right of the white area are statistically significant in a one-tailed test; Fig. 2, left). The fact that studies seem to be missing in the white area of the plot suggests that suppression of non-significant results is likely a factor contributing to funnel plot asymmetry. Again, this plot is in stark contrast to the one displaying our replication studies, in which most studies yielded non-significant results (Fig. 2, right).

Figure 2.

The contour-enhanced funnel plot of published studies (left panel) suggests publication bias. Published studies (left panel) are missing in the white area and the region to the left of the white area (where non-significant results would be plotted). This pattern adds credibility to the possibility that funnel plot asymmetry is caused by publication bias based on statistical significance (Peters et al., 2008). For comparison, our own (mainly non-significant) replication studies are plotted in the right graph. Symbols represent the research group in which each study was performed (left panel) or the lab space that was used (right panel). The white area and the region to the left of the white area contain non-significant one-sided p values (white region: p values between 0.05 and 0.95; dark gray-shaded region: p values between 0.95 and 0.975; medium gray-shaded region: p values between 0.975 and 0.995; light gray-shaded region outside of the funnel: p values > 0.995); areas to the right of the white area represent statistically significant one-sided p values (dark gray-shaded region: p values between 0.025 and 0.05; medium gray-shaded region: p values between 0.005 and 0.025; light gray-shaded region outside of the funnel: p values below 0.005).

Published studies from other research groups versus our replication attempts using MDZ: the subtle nature of reactivation-dependent amnesia

Comparing both funnel plots (Fig. 1) not only suggests publication bias, but also illustrates that the effect sizes obtained in our studies were never as large as in published research. As mentioned earlier, we hardly found statistical evidence for the presence of (large) amnestic effects, while published studies show quite the opposite pattern; they suggest that negative results are rarely obtained in this field (Fig. 2). This highlights that, even in our exact replication attempts, there were inherent differences between our own and published studies that determined the success of the intervention. In line with this observation, the meta-analysis on the sample of published studies showed that the research group in which experiments were performed significantly affected the obtained effect size (see previous paragraph). Such dependence highlights the subtle nature of reactivation-dependent amnesia and raises the question whether other research groups also conducted unsuccessful attempts to obtain amnestic effects (but never published those findings). The results of our preregistered analyses (see results section below) address this question in depth and provide more insight into the overall robustness of postreactivation amnesia for contextual fear memories in rodents.

Results

Funnel plots and Egger’s regression suggest publication bias

The preregistered systematic PubMed search identified 304 articles, 89 of which met our inclusion criteria. The wide range of drugs that has been used with the purpose of inducing reactivation-dependent amnesia for contextual fear memories in rodents is shown in Table 3 (systemic administration) and Table 4 (intracranial administration). The scope of the meta-analysis was narrowed down to reactivation-dependent amnesia induction under standard conditions and with commonly-used amnestic drugs (i.e., those that appeared in five or more of the identified research articles, which were found to be ANI, MDZ, PROP, and MK-801). “Standard” conditions were defined based on theoretical considerations in line with the fear memory reconsolidation account (see Inclusion criteria). The final sample that was included in the preregistered meta-analysis consisted of 52 research articles, containing a total of 77 experiments and 95 drug-vehicle comparisons. It should be noted that one of those papers, i.e., Espejo et al. (2016), was not identified via the systematic PubMed search, but given that the study was already included in the pilot study, we also included it in the present analyses. In addition, we did not include Schroyens et al. (2019a,b; those were included in separate analyses; see Figs. 1, 2, right panels) as we aimed to review the literature that originated from outside our own research group. A detailed overview of experimental parameters used in the studies that were included in the meta-analyses can be found at https://osf.io/sjwbd/.

Table 3

Overview of studies in which pharmacological agents were administered systemically with the aim of inducing reactivation-dependent amnesia for contextual fear memories in rodents, as identified by the systematic review

Table 4

Overview of studies in which pharmacological agents were administered intracranially with the aim of inducing reactivation-dependent amnesia for contextual fear memories in rodents, as identified by the systematic review

The random-effects meta-analysis on this sample (k = 95, N = 1896) showed heterogeneity in effect estimates between studies [i.e., variation in effect estimates beyond chance; Q(94) = 334.08; p < 0.001; τ2 = 0.80 (SE = 0.16) [0.61; 1.43]; I2 = 75.26% [69.93; 84.44]]. Because of this statistical heterogeneity, and as preregistered, amnestic drug and research group were included as moderators, and were found to be significant [QM(33) = 68.64, p < 0.001]. Nevertheless, residual heterogeneity (after considering those moderators) remained significant [QE(61) = 158.14; p < 0.001; τ2 = 0.53 (SE = 0.15) [0.32; 1.07]; I2 = 64.97% [53.02; 79.12]]. I2, the percentage of the variability in effect estimates that is because of heterogeneity rather than chance, decreased from 75% (considerable) to 65% (substantial) after inclusion of the moderators (Deeks et al., 2019). The funnel plot that includes all 95 drug-vehicle comparisons suggests asymmetry (Fig. 3), which was confirmed statistically by Egger’s test (t(60) = 5.04, p < 0.001 for the model including drug and research group as moderators). As mentioned before, one way to distinguish publication bias from other sources of asymmetry is by adding contours of statistical significance to the funnel plot. Such a contour-enhanced funnel including all published drug-vehicle comparisons (Fig. 4) illustrates that studies are missing in the area of statistical non-significance, adding credibility to publication bias being a source of asymmetry. In addition, effect sizes are most densely plotted in the gray area at the border of statistical significance, which might suggest a biased distribution of effect sizes (Simonsohn et al., 2014). Overall, the results based on all selected published studies (using ANI, MDZ, PROP, or MK-801) are in line with those from our pilot study (which only included MDZ), as evidence for publication bias was observed in both sets of analyses.

Figure 3.

Funnel plot including published studies suggests biased effect sizes. The asymmetrical funnel shape observed here was statistically confirmed by Egger’s regression (p < 0.001) and is suggestive of biased study outcomes because of selection of significant results for publication.

Figure 4.

Contour-enhanced funnel plot including published studies suggests publication bias. The white area and the region on its left side contain studies with statistically non-significant amnestic effects based on one-tailed tests (drug < control; p ≥ 0.05). The plot suggests that non-significant studies are missing in the literature (i.e., publication bias). Remarkably, effect sizes are most densely plotted at the border of statistical significance, which might also imply biased effect sizes.

The majority of the included published drug-vehicle comparisons (i.e., 82%) was reported as statistically significant, and over 90% of the published papers concluded that amnesia could be obtained under at least some of the applied standard conditions (i.e., conditions in which the amnestic effect is expected to occur based on theoretical considerations; see Inclusion criteria). Of the 12 published papers reporting non-significant amnestic effects under standard conditions, eight papers did find amnesia in some of the applied standard conditions (i.e., when changing the duration of the reactivation session, when administering ANI instead of PROP, or when infusing the amnestic drug into a different brain area), which leaves a total of four papers (including six comparisons) that found no amnestic effect under standard conditions whatsoever. Most of them did, however, obtain amnesia when using multiple injections (albeit temporarily; Lattal and Abel, 2004), when using a knockout mouse model (Yamada et al., 2009), or when postreactivation MK-801 injection was preceded by prereactivation injection of the cannabinoid CB1 receptor agonist arachidonyl-2-chloroethylamide (ACEA; Lee and Flavell, 2014). Only one of the included papers reported an overall failure to induce amnesia (using PROP; Careaga et al., 2015).

Funnel plots and Egger’s regression suggest publication bias when excluding MDZ studies

Below, we report the results of additional analyses that were not part of the preregistered analysis plan, but that allow for a clearer interpretation of the current findings. Visual inspection of the funnel plot including all studies (Figs. 3, 4) seems to suggest that MDZ studies (plotted in black) strongly contribute to the asymmetrical funnel shape, or, in other words, to the observed correlation between the effect sizes and their SEs suggestive of publication bias. Therefore, we exploratorily repeated the analyses excluding the MDZ studies, to assess whether the same conclusions would still hold when solely looking at the three other amnestic drugs. In addition, we repeated the analyses for each drug separately.

The plot with ANI, PROP, and MK-801 studies (i.e., excluding MDZ; Fig. 5) still showed an asymmetrical funnel shape, which was confirmed statistically by Egger’s regression, even when taking into account the moderating influence of drug and research group (t(32) = 2.41, p = 0.022), albeit to a lesser extent compared with when MDZ was included. When inspecting the results for each drug individually [see “5. Funnel plots per Drug (exploratory analyses)” at https://osf.io/zshwx/], asymmetrical funnel shapes were observed for MDZ, ANI and PROP, but not for MK-801. In addition, asymmetry was no longer observed for ANI (t(12) = 0.95, p = 0.362) or PROP (t(10) = 1.04, p = 0.321) when including research group as a moderator, implying evidence for overall asymmetry for ANI and PROP, but no evidence for asymmetry within research groups.
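
These exploratory per-drug checks could be implemented roughly as follows (a sketch under the same assumptions about the data frame dat as in the earlier sketches; note that within small subsets the moderator model may not always be estimable).

# Repeat the funnel-asymmetry test within each drug, with and without research
# group as a moderator
for (d in c("ANI", "MDZ", "PROP", "MK-801")) {
  sub    <- dat[dat$drug == d, ]
  res_d  <- rma(yi, vi, data = sub, method = "REML")
  res_dg <- rma(yi, vi, mods = ~ factor(group), data = sub, method = "REML")
  print(regtest(res_d,  model = "lm"))   # overall asymmetry for this drug
  print(regtest(res_dg, model = "lm"))   # asymmetry after accounting for group
}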

Figure 5.

Contour-enhanced funnel plot still suggests publication bias when MDZ studies are excluded. The white area and the area on its left side contain studies with statistically non-significant amnestic effects based on one-tailed tests (drug < control; p ≥ 0.05). The asymmetrical funnel shape observed here was statistically confirmed by Egger’s regression (p < 0.001; or with Research Group and Drug as moderators: p = 0.022; exploratory analysis) and is suggestive of biased study outcomes because of selection of significant results for publication.

Unpublished data contain proportionally more failures to replicate than published data

We contacted all corresponding authors from the research articles that were included in the meta-analysis and sent out an E-mail to the Pavlovian Society mailing list to enquire about and request unpublished datasets. In addition, a request for unpublished data was posted on StudySwap (https://osf.io/98dr6/wiki/home/) and ResearchGate (https://bit.ly/34xllde). Figure 6 provides an overview of the received responses.

Figure 6.

Replies to our request for unpublished data (clustered per research group).

Most researchers did not reply to our emails or replied that they did not have any unpublished data available (note that these two categories together comprised 60% of the responses). Some researchers provided information about unpublished studies that did not meet all our inclusion criteria. For example, three researchers replied that they had unpublished reconsolidation data from studies using cued fear memories. There were also three cases in which contextual fear conditioning was used, but the reactivation session was too long for inclusion (i.e., longer than two times the duration of the training session). For two of these, the outcome was also shared: in one, amnestic effects of PROP and MDZ were found, and in the other, no effect of PROP was found. In another series of studies, an excluded amnestic agent, i.e., cycloheximide, was used. A dose of cycloheximide that was found to affect retention when injected after conditioning did not induce amnesia when given after a memory reactivation session, despite varying the parameters of training (0.5- or 0.7-mA shocks) and reactivation (3 or 5 min) in a series of four studies described in an undergraduate student’s report (Zacouteguy Boos et al., 2013). Researchers from three different research groups reported having an (extensive) series of unpublished studies meeting all our inclusion criteria but preferred not to share the data for inclusion in the current analyses. Finally, three researchers (from three different research groups) offered to share their data but did not manage to access and/or send those data in time. We did receive unpublished data that could be included in the current meta-analyses from seven researchers from five different research groups (a total of 12 drug-vehicle comparisons). Importantly, the amount of unpublished data that we could include in the current manuscript is less than half of all the unpublished data whose existence was disclosed to us by the contacted researchers. Overall, it appears that statistically non-significant results from reconsolidation studies in rodents are less likely to be published and, in some cases, researchers were unable or reluctant to share such “negative” data for the current paper.

The obtained unpublished studies that met all our inclusion criteria are plotted in combination with the published data (Fig. 7) and alone (Fig. 8). A total of 12 drug-vehicle comparisons was included, in which either MK-801 (six studies), PROP (three studies), or MDZ (three studies) was administered before or after a contextual fear memory reactivation session. One (MK-801) study contained two intervention groups that were compared with the same control group, so the intervention groups were combined into a single group as recommended by Higgins and Green (2011). A detailed overview of the adopted parameters of those studies can be found at https://osf.io/gfwrj/. The funnel plot including all studies (Fig. 7) still shows asymmetry after inclusion of the obtained unpublished data (t(70) = 5.63, p < 0.001; with drug and research group as moderators). This was not unexpected given that we probably did not track down all existing unpublished data and because a large part of the unpublished data that we did uncover were eventually not shared by the authors for inclusion in the current paper. Importantly, studies that previously remained unpublished show smaller and mostly statistically insignificant effect sizes compared with those reported in the literature (Fig. 7). Although the limited amount of unpublished data does not allow for robust conclusions, the symmetrical funnel shape that is observed when plotting unpublished datasets only (Fig. 8) suggests no evidence for bias in our sample of unpublished studies (t(5) = –0.34, p = 0.751). Only 20% of the shared unpublished experiments found a statistically significant amnestic effect (significance based on one-sided t tests), while around 80% of the published experiments reported statistically significant amnestic effects. An exploratory Fisher’s exact probability test suggested that those proportions between published and unpublished studies were significantly different (p < 0.001). Note that any conclusions drawn from such a comparison should be interpreted with caution, given that the obtained unpublished data may not be representative of all existing unpublished data. An exploratory (i.e., not preregistered) random-effects meta-analysis with publication status (i.e., whether a study was published or unpublished) as a moderator revealed that publication status significantly moderated the size of amnestic effects [QM(1) = 17.06, p < 0.001].
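
The two exploratory comparisons described above could be run along the following lines. This is a sketch only: the counts are placeholders that approximate the reported percentages (roughly 80% significant among 95 published vs 20% among 12 unpublished comparisons) rather than the authors' exact tallies, and dat_all is an assumed data frame combining published and unpublished comparisons with a factor column 'published'.

# Fisher's exact test on the proportions of significant results (placeholder counts)
counts <- matrix(c(76, 19,    # published:   significant, non-significant
                    2, 10),   # unpublished: significant, non-significant
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("published", "unpublished"),
                                 c("significant", "non-significant")))
fisher.test(counts)

# Publication status as a moderator in the random-effects model
rma(yi, vi, mods = ~ factor(published), data = dat_all, method = "REML")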

Figure 7.

Contour-enhanced funnel plot of published (filled circles) and unpublished (empty circles) studies. In contrast to published results (95 drug-vehicle comparisons), studies that remained unpublished (12 drug-vehicle comparisons) showed smaller, and mostly non-significant, amnestic effects. This discrepancy between published and unpublished results is in line with the presence of publication bias that was suggested by the funnel plots. The majority of unpublished “negative” studies whose existence was revealed to us could not be included in the current study because of author preferences.

Figure 8.

Contour-enhanced funnel plot of unpublished studies. A relatively symmetrical pattern is observed and, in line with this observation, Egger’s regression suggests that there is no statistically significant evidence for asymmetry (t(10) = 0.70, p = 0.499).

Discussion

Our own extensive experience with drug-induced, reactivation-dependent amnesia for contextual fear memories in rats (Schroyens et al., 2019a,b) suggested that amnestic effects are not easily found, even when performing well-powered, exact replication attempts of published “positive” studies and trying out a wide variety of experimental parameters in several different laboratories. Of note, we also failed to obtain amnestic effects using behavioral or pharmacological interventions for cued fear memories in rats (Luyten and Beckers, 2017; Luyten et al., 2020) or healthy human participants (Schroyens et al., 2017; Chalkia et al., 2019, 2020a). Those observations, although corroborated by personal communication with experts in the field, were in stark contrast with the published literature, which contains a plethora of significant (mostly large) amnestic effects and hardly any negative results. This discrepancy inspired us to formally investigate publication bias.

We performed a systematic PubMed search and selected studies that aimed to induce reactivation-dependent amnesia for contextual fear memories in rodents under standard conditions with a commonly-used amnestic drug (i.e., ANI, MDZ, PROP, or MK-801; see above, Inclusion criteria). The majority of the 95 included published drug-vehicle comparisons (i.e., 80%) was reported as statistically significant, and funnel plots and Egger’s linear regression provided evidence for publication bias in this sample. Only one of the included papers reported an overall failure to induce amnesia (Careaga et al., 2015). In contrast, the data that we received from previously unpublished studies mostly consisted of “negative” findings, as around 80% did not find a statistically significant amnestic effect. This discrepancy between published and unpublished results further supports the presence of publication bias. It should be mentioned that part of the unpublished experiments that we were informed of could not be included in the current study due to the inability or reluctance of some of the researchers to share relevant information about their unpublished findings. In any case, the current results suggest that the literature on reactivation-dependent amnesia for contextual fear memories in rodents is biased.

Possible sources of publication bias can be found at different stages, such as author submission, peer review, or editorial decisions (in which the journal’s policy may play a role; Song et al., 2009). Authors’ decisions not to submit their negative results for publication can result from (1) the fact that such results are considered unimportant, (2) fear of debunking one’s own previously published results, theories, or conclusions, or (3) the expectation of rejection by (prestigious) journals. Importantly, in the presence of publication bias, the published studies as a whole do not provide solid evidence concerning the reliability of reactivation-dependent amnesia. Selective publication of research findings depending on their statistical outcome results in the literature painting an overly optimistic picture, with misleading overall estimates of the size and replicability of amnestic effects. This false image, in turn, may lead researchers to invest time and resources in an effect that seemed to be robust but may turn out to be non-replicable or, at least, difficult to replicate.

Based on the evidence for publication bias provided here and the results of our empirical studies in which no evidence for reactivation-dependent amnesia was obtained (Schroyens et al., 2019a,b), we do not claim that such a phenomenon for contextual fear memories in rodents does not exist, nor do we intend to doubt the veracity of the published studies included here; but we do conclude that drug-induced reactivation-dependent amnesia for contextual fear memories in rodents is far less robust than what is projected by the existing literature. In light of other empirical studies from our and other labs that reported failures to replicate, the same may apply to cued fear memories in rodents (Luyten and Beckers, 2017; Luyten et al., 2020) and healthy humans (Bos et al., 2014; Thome et al., 2016; Schroyens et al., 2017; Chalkia et al., 2019, 2020a). We want to point out that the intuitive reasoning that an effect truly exists because it has been reported many times can be problematic, as it has been suggested that such counting ignores reporting bias, selection bias, and questionable research practices (Hedges and Olkin, 1985; Vadillo et al., 2016). Likewise, non-significant results should be interpreted as absence of evidence for, rather than evidence of absence of, a treatment effect (Taleb, 2007), and the observation of a statistically non-significant result should not be equated with the underlying theory being wrong (Meehl, 1990).

It is good to note that publication bias is almost certainly not unique to the reconsolidation field; it is likely to hinder accurate estimation of effect sizes for many other (behavioral) phenomena as well. In this paper, we focused on a delineated part of the reconsolidation literature to systematically investigate publication bias, allowing us to illustrate the existence and pervasiveness of publication bias in this particular research domain. The obtained results provide us with a clearer view on the potential translational value of reactivation-dependent amnesia for fear memories. We strongly believe that other research areas may also benefit from systematic investigations that (1) confirm or disconfirm the existence of publication bias and, if applicable, (2) shed light on its extent.

It should be noted that publication bias is only part of the story. Our own failures to exactly replicate prior “positive” studies already suggested that study outcome could depend on the lab in which the study was performed, or at least, that the outcome depends on subtle and unknown factors that differ between labs. In line with our experiences, the current meta-analysis suggested that the size of the amnestic effect depends on the research group in which the experiment was performed. Nevertheless, even within research groups and amnestic drugs, statistically significant between-study heterogeneity was observed, suggesting that observed effect sizes show differences beyond those to be expected by chance. Such heterogeneity indicates that the size of the amnestic effect, even under standard conditions, is expected to vary significantly. In combination with the current evidence for publication bias and the range of identified null findings, this implies that the outcome of postreactivation amnestic treatments is unpredictable. In addition, dominant theories (e.g., reconsolidation, state dependency) in their current form are unable to pinpoint which factors exactly influence the occurrence and size of reactivation-dependent amnestic effects. The call to define moderators of amnestic effects, which has often been voiced in reply to replication failures, could be valuable for the further development or refinement of theories on reactivation-dependent amnesia, provided that data-driven moderators are also tested empirically.

The lack of robustness of reactivation-dependent amnesia, in combination with the strict (yet vaguely defined) conditions required for memory destabilization and the absence of clear explanations for some of the observed null effects, casts doubt on the potential of the proposed clinical application of postreactivation interventions for the treatment of phobias or posttraumatic stress disorder (PTSD). Indeed, studies in (sub)clinical samples have not been entirely convincing (Brunet et al., 2011; Wood et al., 2015; Kindt and van Emmerik, 2016; Elsey et al., 2020; for an overview, see Beckers and Kindt, 2017). Importantly, the mixed results obtained in clinical studies might not reflect problems with translation from basic to clinical science but may simply reflect the limited robustness of results obtained in basic research and the lack of insight into the optimal and boundary conditions for reactivation-dependent memory interference.

For the field to evolve, replication and unbiased publication of obtained results are essential. The classical publication system clearly favors novel or “positive” results, but a set of valuable new tools creates opportunities to increase the transparency, reproducibility, and credibility of research findings. For example, documenting hypotheses, research design, and/or planned analyses in a public repository before data collection starts, referred to as “preregistration,” ensures a clear distinction between hypotheses and analysis plans formulated before versus after observing the results, and these documents can be made publicly accessible upon publication of the paper (Nosek et al., 2018). The OSF is an online platform that can be used for such preregistration and for sharing data, analysis scripts, and other materials (http://osf.io). Making datasets and analysis scripts publicly available increases transparency and enhances the credibility of reported results and conclusions (Klein et al., 2018). Nevertheless, while valuable, those tools are mostly a means to an end: agreement between registered and performed analyses must still be verified, and the analytic reproducibility of published results needs to be checked. One valuable publication format in this regard is the Verification Report, in which authors reanalyze the original data of a previously published empirical article using the reported analyses, to verify whether the same conclusions can be drawn as those reported in the original article (Chambers, 2020; see Chalkia et al., 2020b, for an example from the reconsolidation field). Finally, the use of Registered Reports, in which in-principle acceptance for publication is granted before data collection commences, ensures inclusion of study results in the published record on the basis of the quality of the methods, regardless of a study’s outcome (Hardwicke and Ioannidis, 2018). This format removes the pressure to produce statistically significant findings for publication and thereby prevents publication bias (https://cos.io/rr provides helpful guidelines for submitting a Registered Report and an extensive list of participating journals). Researchers can thus use these tools to increase transparency and reproducibility, both of which are essential for the reconsolidation field (and empirical science in general) to move forward.

Acknowledgments

Acknowledgements: We thank all researchers with whom we corresponded during our quest for unpublished data for the time and effort they took to look up their data.

Footnotes

  • The authors declare no competing financial interests.

  • This work is supported by the European Research Council Consolidator Grant 648176 (to T.B.) and a Doctoral Fellowship of the Research Foundation – Flanders (FWO), Belgium, (Grant 1114018N; to N.S.).

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Akagi K, Yamada M, Saitoh A, Oka JI, Yamada M (2018) Post-reexposure administration of riluzole attenuates the reconsolidation of conditioned fear memory in rats. Neuropharmacology 131:1–10. doi:10.1016/j.neuropharm.2017.12.009 pmid:29225045
  2. Alfei JM, Ferrer Monti RI, Molina VA, Bueno AM, Urcelay GP (2015) Prediction error and trace dominance determine the fate of fear memories after post-training manipulations. Learn Mem 22:385–400. doi:10.1101/lm.038513.115 pmid:26179232
  3. Beckers T, Kindt M (2017) Memory reconsolidation interference as an emerging treatment for emotional disorders: strengths, limitations, challenges, and opportunities. Annu Rev Clin Psychol 13:99–121. doi:10.1146/annurev-clinpsy-032816-045209 pmid:28375725
  4. Bos MGN, Beckers T, Kindt M (2014) Noradrenergic blockade of memory reconsolidation: a failure to reduce conditioned fear responding. Front Behav Neurosci 8:412. doi:10.3389/fnbeh.2014.00412 pmid:25506319
  5. Brunet A, Poundja J, Tremblay J, Bui E, Thomas E, Orr SP, Azzoug A, Birmes P, Pitman RK (2011) Trauma reactivation under the influence of propranolol decreases posttraumatic stress symptoms and disorder: 3 open-label trials. J Clin Psychopharmacol 31:547–550. doi:10.1097/JCP.0b013e318222f360 pmid:21720237
  6. Bustos SG, Maldonado H, Molina VA (2006) Midazolam disrupts fear memory reconsolidation. Neuroscience 139:831–842. doi:10.1016/j.neuroscience.2005.12.064 pmid:16542779
  7. Bustos SG, Maldonado H, Molina VA (2009) Disruptive effect of midazolam on fear memory reconsolidation: decisive influence of reactivation time span and memory age. Neuropsychopharmacology 34:446–457. doi:10.1038/npp.2008.75 pmid:18509330
  8. Bustos SG, Giachero M, Maldonado H, Molina VA (2010) Previous stress attenuates the susceptibility to midazolam’s disruptive effect on fear memory reconsolidation: influence of pre-reactivation D-cycloserine administration. Neuropsychopharmacology 35:1097–1108. doi:10.1038/npp.2009.215 pmid:20043007
  9. Careaga MBL, Tiba PA, Ota SM, Suchecki D (2015) Pre-test metyrapone impairs memory recall in fear conditioning tasks: lack of interaction with β-adrenergic activity. Front Behav Neurosci 9:51.
  10. Chalkia A, Weermeijer J, Van Oudenhove L, Beckers T (2019) Acute but not permanent effects of propranolol on fear memory expression in humans. Front Hum Neurosci 13:51. doi:10.3389/fnhum.2019.00051
  11. Chalkia A, Schroyens N, Leng L, Vanhasbroeck N, Zenses A-K, Van Oudenhove L, Beckers T (2020a) No persistent attenuation of fear memories in humans: a registered replication of the reactivation-extinction effect. Cortex 129:496–509. doi:10.1016/j.cortex.2020.04.017 pmid:32580869
  12. Chalkia A, Van Oudenhove L, Beckers T (2020b) Preventing the return of fear in humans using reconsolidation update mechanisms: a verification report of Schiller et al. (2010). Cortex 129:510–525. doi:10.1016/j.cortex.2020.03.031 pmid:32563517
  13. Chambers CD (2020) Verification reports: a new article type at Cortex. Cortex 129:A1–A3. doi:10.1016/j.cortex.2020.04.020 pmid:32563516
  14. Chartier CR, Riegelman A, McCarthy RJ (2018) StudySwap: a platform for interlab replication, collaboration, and resource exchange. Adv Methods Pract Psychol Sci 1:574–579. doi:10.1177/2515245918808767
  15. Couto-Pereira NS, Lampert C, Vieira A dos S, Lazzaretti C, Kincheski GC, Espejo PJ, Molina VA, Quillfeldt JA, Dalmaz C (2019) Resilience and vulnerability to trauma: early life interventions modulate aversive memory reconsolidation in the dorsal hippocampus. Front Mol Neurosci 12:134. doi:10.3389/fnmol.2019.00134 pmid:31191245
  16. De Oliveira Alvares L, Crestani AP, Cassini LF, Haubrich J, Santana F, Quillfeldt JA (2013) Reactivation enables memory updating, precision-keeping and strengthening: exploring the possible biological roles of reconsolidation. Neuroscience 244:42–48. doi:10.1016/j.neuroscience.2013.04.005 pmid:23587841
  17. Deeks JJ, Higgins JPT, Altman DG (2019) Analysing data and undertaking meta-analyses. In: Cochrane handbook for systematic reviews of interventions version 6.0 (updated July 2019), Chap 10. London: Cochrane.
  18. Elsey JWB, Filmer AI, Galvin HR, Kurath JD, Vossoughi L, Thomander LS, Zavodnik M, Kindt M (2020) Reconsolidation-based treatment for fear of public speaking: a systematic pilot study using propranolol. Transl Psychiatry 10:179. doi:10.1038/s41398-020-0857-z pmid:32499503
  19. Espejo PJ, Ortiz V, Martijena ID, Molina VA (2016) Stress-induced resistance to the fear memory labilization/reconsolidation process. Involvement of the basolateral amygdala complex. Neuropharmacology 109:349–356. doi:10.1016/j.neuropharm.2016.06.033 pmid:27378335
  20. Espejo PJ, Ortiz V, Martijena ID, Molina VA (2017) GABAergic signaling within the basolateral amygdala complex modulates resistance to the labilization/reconsolidation process. Neurobiol Learn Mem 144:166–173. doi:10.1016/j.nlm.2017.06.004 pmid:28669783
  21. Ferrer Monti RI, Giachero M, Alfei JM, Bueno AM, Cuadra G, Molina VA (2016) An appetitive experience after fear memory destabilization attenuates fear retention: involvement GluN2B-NMDA receptors in the basolateral amygdala complex. Learn Mem 23:465–478. doi:10.1101/lm.042564.116 pmid:27531837
  22. Ferrer Monti RI, Alfei JM, Mugnaini M, Bueno AM, Beckers T, Urcelay GP, Molina VA (2017) A comparison of behavioral and pharmacological interventions to attenuate reactivated fear memories. Learn Mem 24:369–374. doi:10.1101/lm.045385.117 pmid:28716956
  23. Franzen JM, Giachero M, Bertoglio LJ (2019) Dissociating retrieval-dependent contextual aversive memory processes in female rats: are there cycle-dependent differences? Neuroscience 406:542–553. doi:10.1016/j.neuroscience.2019.03.035 pmid:30935981
  24. Hardwicke TE, Ioannidis JPA (2018) Mapping the universe of registered reports. Nat Hum Behav 2:793–796. doi:10.1038/s41562-018-0444-y pmid:31558810
  25. Hedges LV, Olkin I (1985) Vote-counting methods. In: Statistical methods for meta-analysis, pp 47–74. San Diego: Elsevier.
  26. Higgins JPT, Green S (2011) Data extraction for continuous outcomes. In: Cochrane handbook for systematic reviews of interventions. London: Cochrane.
  27. Kindt M, Soeter M, Vervliet B (2009) Beyond extinction: erasing human fear responses and preventing the return of fear. Nat Neurosci 12:256–258.
  28. Kindt M, van Emmerik A (2016) New avenues for treating emotional memory disorders: towards a reconsolidation intervention for posttraumatic stress disorder. Ther Adv Psychopharmacol 6:283–295. doi:10.1177/2045125316644541 pmid:27536348
  29. Klein O, Hardwicke TE, Aust F, Breuer J, Danielsson H, Hofelich Mohr A, Ijzerman H, Nilsonne G, Vanpaemel W, Frank MC (2018) A practical guide for transparency in psychological science. Collabra Psychol 4:20. doi:10.1525/collabra.158
  30. Lattal KM, Abel T (2004) Behavioral impairments caused by injections of the protein synthesis inhibitor anisomycin after contextual retrieval reverse with time. Proc Natl Acad Sci USA 101:4667–4672. doi:10.1073/pnas.0306546101 pmid:15070775
  31. Lee JLC, Flavell CR (2014) Inhibition and enhancement of contextual fear memory destabilization. Front Behav Neurosci 8:144. doi:10.3389/fnbeh.2014.00144 pmid:24808841
  32. Luyten L, Beckers T (2017) A preregistered, direct replication attempt of the retrieval-extinction effect in cued fear conditioning in rats. Neurobiol Learn Mem 144:208–215. doi:10.1016/j.nlm.2017.07.014 pmid:28765085
  33. Luyten L, Schnell AE, Schroyens N, Beckers T (2020) Rats remember: lack of drug-induced post-retrieval amnesia for auditory fear memories. bioRxiv. doi:10.1101/2020.07.08.193383
  34. Meehl PE (1990) Appraising and amending theories: the strategy of Lakatosian defense and two principles that warrant it. Psychol Inq 1:108–141. doi:10.1207/s15327965pli0102_1
  35. Misanin JR, Miller RR, Lewis DJ (1968) Retrograde amnesia produced by electroconvulsive shock after reactivation of a consolidated memory trace. Science 160:554–555. doi:10.1126/science.160.3827.554 pmid:5689415
  36. Nader K, Schafe GE, LeDoux JE (2000) Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. Nature 406:722–726.
  37. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT (2018) The preregistration revolution. Proc Natl Acad Sci USA 115:2600–2606. doi:10.1073/pnas.1708274114 pmid:29531091
  38. Ortiz V, Giachero M, Espejo PJ, Molina VA, Martijena ID (2015) The effect of midazolam and propranolol on fear memory reconsolidation in ethanol-withdrawn rats: influence of d-cycloserine. Int J Neuropsychopharmacol 18:pyu082.
  39. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L (2008) Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol 61:991–996. doi:10.1016/j.jclinepi.2007.11.010
  40. Pineyro ME, Ferrer Monti RI, Alfei JM, Bueno AM, Urcelay GP (2013) Memory destabilization is critical for the success of the reactivation-extinction procedure. Learn Mem 21:785–793. doi:10.1101/lm.032714.113
  41. Reichelt AC, Lee JLC (2013) Memory reconsolidation in aversive and appetitive settings. Front Behav Neurosci 7:118.
  42. Rohatgi A (2019) WebPlotDigitizer. Pacifica, CA. Available from https://automeris.io/WebPlotDigitizer.
  43. Saitoh A, Akagi K, Oka J-I, Yamada M (2017) Post-reexposure administration of d-cycloserine facilitates reconsolidation of contextual conditioned fear memory in rats. J Neural Transm (Vienna) 124:583–587. doi:10.1007/s00702-017-1704-0 pmid:28275863
  44. Schroyens N, Beckers T, Kindt M (2017) In search for boundary conditions of reconsolidation: a failure of fear memory interference. Front Behav Neurosci 11:65. doi:10.3389/fnbeh.2017.00065 pmid:28469565
  45. Schroyens N, Alfei JM, Schnell AE, Luyten L, Beckers T (2019a) Limited replicability of drug-induced amnesia after contextual fear memory retrieval in rats. Neurobiol Learn Mem 166:107105. doi:10.1016/j.nlm.2019.107105 pmid:31705982
  46. Schroyens N, Bender CL, Alfei JM, Molina VA, Luyten L, Beckers T (2019b) Post-weaning housing conditions influence freezing during contextual fear conditioning in adult rats. Behav Brain Res 359:172–180. doi:10.1016/j.bbr.2018.10.040 pmid:30391556
  47. Simonsohn U, Nelson LD, Simmons JP (2014) P-curve: a key to the file-drawer. J Exp Psychol Gen 143:534–547. doi:10.1037/a0033242
  48. Soeter M, Kindt M (2011) Disrupting reconsolidation: pharmacological and behavioral manipulations. Learn Mem 18:357–366. doi:10.1101/lm.2148511 pmid:21576515
  49. Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, Sutton AJ, Hing CB, Harvey I (2009) Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 9:79.
  50. Stern CAJ, Gazarini L, Takahashi RN, Guimarães FS, Bertoglio LJ (2012) On disruption of fear memory by reconsolidation blockade: evidence from cannabidiol treatment. Neuropsychopharmacology 37:2132–2142. doi:10.1038/npp.2012.63
  51. Sterne JAC, Egger M (2001) Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol 54:1046–1055. doi:10.1016/S0895-4356(01)00377-8
  52. Sterne JAC, Egger M (2005) Regression methods to detect publication and other bias in meta-analysis. In: Publication bias in meta-analysis: prevention, assessment and adjustments, pp 99–110. Hoboken: Wiley.
  53. Sterne JAC, Becker BJ, Egger M (2005) The funnel plot. In: Publication bias in meta-analysis: prevention, assessment and adjustments, pp 75–98. Hoboken: Wiley.
  54. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, Carpenter J, Rücker G, Harbord RM, Schmid CH, Tetzlaff J, Deeks JJ, Peters J, Macaskill P, Schwarzer G, Duval S, Altman DG, Moher D, Higgins JPT (2011) Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343:d4002. doi:10.1136/bmj.d4002 pmid:21784880
  55. Taleb NN (2007) The black swan: the impact of the highly improbable. New York: Random House.
  56. Thome J, Koppe G, Hauschild S, Liebke L, Schmahl C, Lis S, Bohus M (2016) Modification of fear memory by pharmacological and behavioural interventions during reconsolidation. PLoS One 11:e0161044. doi:10.1371/journal.pone.0161044 pmid:27537364
  57. Tronson NC, Taylor JR (2007) Molecular mechanisms of memory reconsolidation. Nat Rev Neurosci 8:262–275. doi:10.1038/nrn2090 pmid:17342174
  58. Vadillo MA, Hardwicke TE, Shanks DR (2016) Selection bias, vote counting, and money-priming effects: a comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015). J Exp Psychol Gen 145:655–663. doi:10.1037/xge0000157 pmid:27077759
  59. Viechtbauer W (2010) Conducting meta-analyses in R with the metafor package. J Stat Softw 36:1–48.
  60. Wood NE, Rosasco ML, Suris AM, Spring JD, Marin M-F, Lasko NB, Goetz JM, Fischer AM, Orr SP, Pitman RK (2015) Pharmacological blockade of memory reconsolidation in posttraumatic stress disorder: three negative psychophysiological studies. Psychiatry Res 225:31–39. doi:10.1016/j.psychres.2014.09.005 pmid:25441015
  61. Yamada D, Zushida K, Wada K, Sekiguchi M (2009) Pharmacological discrimination of extinction and reconsolidation of contextual fear memory by a potentiator of AMPA receptors. Neuropsychopharmacology 34:2574–2584. doi:10.1038/npp.2009.86 pmid:19675539
  62. Zacouteguy Boos F, de Oliveira Alvares L, Quillfeldt JA (2013) Repadronização de protocolo na tarefa de condicionamento aversivo ao contexto para emprego com nova cepa de ratos [Protocol restandardization in the contextual fear conditioning task for use with a new rat strain]. Porto Alegre: Universidade Federal do Rio Grande do Sul.
  63. Zhang S, Cranney J (2008) The role of GABA and anxiety in the reconsolidation of conditioned fear. Behav Neurosci 122:1295–1305. doi:10.1037/a0013273 pmid:19045949

Synthesis

Reviewing Editor: Laura Bradfield, University of Technology Sydney

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: NONE.

Thank you for your additions, changes, and revisions; this manuscript is now in great shape. Apologies for the delay in making this decision - one of the original reviewers was unable to re-review it, so I wanted to make sure I had a chance to look over it myself in some detail, which I have now done. The one reviewer who did re-review found all the issues adequately addressed and had no further comments.

As for myself, I had no major issues that need addressing, but I do have a few minor ones.

• Lines 264-265 - “was found to be a significant moderator...” moderator of the heterogeneity? The position on the funnel plot? If this could be clarified in text it would help me understand it.

• Line 272-273 - “there was a significant positive relationship between the effect estimates and their standard errors” - please clarify what this indicates in text.

• Line 384. I wonder if having the word ‘reminder’ in parentheses is necessary? If not, please remove it; otherwise it comes across as a little accusatory.

• Likewise for lines 403-404, starting with “and, in some cases...”. Does this need to be said? I’m open to an argument for its inclusion; I just wonder if it comes across as implying that the researchers did not want a paper published suggesting that the effect was not robust. Unless we explicitly know that this is the case (because they said so), I think it’s best to leave out this passage. There could be any number of reasons why a researcher may not share unpublished results (maybe a grad student made a mistake? Maybe a fire alarm went off in the middle of the experiment - both things that have happened to me!). I will note that where this is listed in the discussion as a possible reason why researchers did not share their data, it is fine and doesn’t seem accusatory in the same manner.

• I think I mentioned this in my review of the first-stage registered report, but I again have the concern that should we take any behavioural effect and ask people for their unpublished data, we are going to find evidence of publication bias and a less robust effect than what appears from the published data. Data that remains unpublished is, by its nature, more likely to be non-significant. I remember that the authors provided some rebuttal to this point, but I would still like some kind of acknowledgement in the discussion that future studies might reveal a similar lack of robustness for other behavioural effects should a similar process to that undertaken here be applied to them. Off the top of my head, I know I had a hard time getting overexpectation to work, so I wonder if that is also a not-very-robust effect that has benefitted from publication bias. However, the authors need not be that specific - please just include a sentence somewhere in the discussion mentioning the possibility that reactivation-dependent amnesia for contextual fear memories is not the only effect suffering from such bias.

Author Response

Dear Dr. Bradfield,

Thank you for going through our Stage 2 manuscript in detail and for your comments and suggestions. We address your comments below, and changed the manuscript accordingly (marked in red font).

- Lines 264-265 - “was found to be a significant moderator...” moderator of the heterogeneity? The position on the funnel plot? If this could be clarified in text it would help me understand it.

We added the following information to the manuscript:

"Given such heterogeneity, and as preregistered, research group was included in the model as a moderator. We found that the effect sizes plotted in Fig. 1 (left panel) depended on the research group in which the experiment was performed (i.e., research group was a statistically significant moderator of effect size; QM(5) = 82.35, p < .001).”

- Line 272-273 - “there was a significant positive relationship between the effect estimates and their standard errors” - please clarify what this indicates in text.

We added the following information to the manuscript:

"Doing so, Egger’s test provided statistical evidence for funnel plot asymmetry (t(27) = 5.02; p < .001), which can be an indication of publication bias.”

- Line 384. I wonder if having the word ‘reminder’ in parentheses is necessary? If not please remove it otherwise it comes across a little accusatory.

We removed ‘reminder’ from the text.

- Likewise for lines 403-404, starting with “and, in some cases...”. Does this need to be said? I’m open to an argument for its inclusion; I just wonder if it comes across as implying that the researchers did not want a paper published suggesting that the effect was not robust. Unless we explicitly know that this is the case (because they said so), I think it’s best to leave out passages such as this. There could be any number of reasons why a researcher may not share unpublished results (maybe a grad student made a mistake? Maybe a fire alarm went off in the middle of the experiment - both things that have happened to me!).

We had written that “...in some cases, researchers were unable or reluctant to share such ‘negative’ data for the current paper”. In addition, in the Results section we had pointed out that “Researchers from three different research groups reported to have an (extensive) series of unpublished studies meeting all our inclusion criteria but wished not to share the data for inclusion in the current analyses” (lines 388-390).

None of those researchers explicitly said that they did not want their results to be included in our meta-analysis due to the fact that those studies illustrated that the effect was not robust. Below, we would like to elaborate on the communication that we had with those researchers that led us to include the sentences above in the manuscript. We hope that you agree that there are sufficient arguments for including this in the manuscript, but if not, please let us know.

The arguments were based on confidential conversations with the researchers involved and were removed from the published version of the rebuttal letter because the authors wished to keep this information private.

- One of the researchers wrote us that they would not share their unpublished data because they were “considering publishing them in the future” and they expressed the belief that “submitting it to your project would possibly create a bias in your analysis.”

- Another research group revealed to us (before we started working on the current meta-analysis) that they had a low success rate in replicating their own work. In addition, they mentioned that they had a large amount of unpublished data with negative results that were collected over several years (verbal communication; 2018). Upon contacting several researchers from this group with a request to include their unpublished data in our meta-analysis, they kept referring us to one another and the person with final responsibility (i.e., the current PI of the lab) did not reply to any of our requests for unpublished data.

- A third research group revealed to us via email that they have a series of more than 10 failed reconsolidation experiments with contextual fear memories in mice (just to be clear, the exact details of those studies were not revealed so we are not aware of how many studies exactly would fit our inclusion criteria). However, after quite some communication back and forth, it appeared that the authors did not want to reveal any more information or data for inclusion in our meta-analysis unless they would be included as co-authors on the current paper.

- I think I mentioned this in my review of the first-stage registered report, but I again have the concern that should we take any behavioural effect and ask people for their unpublished data, we are going to find evidence of publication bias and a less robust effect than what appears from the published data. Data that remains unpublished is, by its nature, more likely to be non-significant. I remember that the authors provided some rebuttal to this point, but I would still like some kind of acknowledgement in the discussion that future studies might reveal a similar lack of robustness for other behavioural effects should a similar process to that undertaken here be applied to them. Off the top of my head, I know I had a hard time getting overexpectation to work, and it seems a highly parameter-specific effect from my experience, so I wonder if it is also a not-very-robust effect that has benefitted from publication bias (although the authors need not be that specific - please just include a sentence somewhere in the discussion mentioning the possibility of reactivation-dependent amnesia for contextual fear memories not being the only effect suffering from such bias).

We added the following paragraph to the manuscript (lines 474-481):

"It is good to note that publication bias is probably by no means unique to the reconsolidation field; it is likely to hinder accurate estimation of effect sizes for many other (behavioral) phenomena as well. In this paper, we focused on a delineated part of the reconsolidation literature to systematically investigate publication bias, allowing us to illustrate the existence and pervasiveness of publication bias in this particular research domain. The obtained results provide us with a clearer view on the potential translational value of reactivation-dependent amnesia for fear memories. We strongly believe that other research areas may also benefit from systematic investigations that (dis)confirm (1) the existence of publication bias and, if applicable, (2) shed light on its extent.”

Keywords

  • amnesia
  • contextual fear memory
  • pharmacology
  • publication bias
  • reconsolidation
  • rodents
