Commentary, History, Teaching, and Public Awareness

From Methods to Monographs: Fostering a Culture of Research Quality

Devon C. Crawford, Mariah L. Hoye and Shai D. Silberberg
eNeuro 8 August 2023, 10 (8) ENEURO.0247-23.2023; https://doi.org/10.1523/ENEURO.0247-23.2023
Author affiliation: Office of Research Quality, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892

“Al estudiar las monografías de la especialidad que se desee cultivar, debemos fijarnos sobre todo en dos cosas: en los métodos de investigación de que el autor se ha servido en sus pesquisas, y en los problemas que han quedado pendientes de solución.”

[“When studying monographs from the specialty one wishes to cultivate, we should focus on two things above all: on the investigation methods that the author has used in their research, and on the problems that are still pending a solution.”]

(Ramón y Cajal, 1897) [Translation to English by Devon C. Crawford]

Introduction

Santiago Ramón y Cajal is renowned for his careful observations and highly detailed drawings of cells within the nervous system. His advice, that our primary focus when reading scholarly communications should be on the methods used and the problems yet to be solved (Ramón y Cajal, 1897), feels especially difficult to follow in the current age. Manuscript formatting guidelines and publication conventions often cause investigators to abbreviate their research methods, omitting important information about the design, conduct, and analysis of a study. As a result, it is often challenging to evaluate the quality of the “investigation methods that the author has used in their research” in many research documents and thus to deduce the “problems that are still pending a solution.” With the central role that publications play in both communicating science and assessing researchers for career progression and financial support, thorough methodological reporting is necessary for others to evaluate the quality of the research. Although great scientific advances have been made over hundreds of years and continue to be made today, fully transparent reporting could accelerate progress toward fundamental biological knowledge and treatments for disease.

Understanding the factors that may have led to current practices and norms in communicating science could serve as a guide to redirect the culture of science to better emphasize rigorous and transparent research methods, as Ramón y Cajal advised. The National Institute of Neurological Disorders and Stroke (NINDS), a component of the National Institutes of Health (NIH), has in recent years promoted enhanced attention to rigor and transparency in the scientific ecosystem through focused programs. Herein we provide historical context for the cultural issues targeted by such programs, describe these efforts, and present a vision for a future that incentivizes research quality commensurate with its vital role in scientific progress. Bolstering partnerships with the scientific community is vital for elevating scientific rigor and transparent reporting within the scientific culture so that, together, we may help improve the practice of scientific research for the benefit of all.

The Scientific Ecosystem

There are many contributors to the scientific ecosystem that have, both historically and today, played important roles in defining norms and shaping culture in how science is performed and reported (Fig. 1). Because of the deep-seated origins of many of today’s practices (emphasized by specific references used throughout this piece), no single person or entity is responsible for creating the current ecosystem or for changing the direction of the entire culture. Rather, because of their interdependence, these entities need to work together for sustained cultural change to be attainable. As stated in 2014 by then-Director of NIH Francis Collins and Principal Deputy Director of NIH Lawrence Tabak, “We are reaching out broadly to the research community, scientific publishers, universities, industry, professional organizations, patient-advocacy groups and other stakeholders to take the steps necessary to reset the self-corrective process of scientific inquiry” (Collins and Tabak, 2014).

[Figure 1: diagram of the scientific ecosystem. Four outer circles, Professional Development (black), Publications (dark green), Funding (magenta), and Institutional Incentives (blue), point toward a central circle, Scientists (orange).]
Figure 1.

Incentives and pressures in the scientific ecosystem. Although scientists are ultimately responsible for the quality of the research they perform and how it is reported, they are also under numerous pressures from various parts of the scientific ecosystem that can negatively impact rigor and transparency. There are pressures to publish research articles (especially in high-impact journals), obtain funding (which may, in part, depend on publication history), seek professional development opportunities to advance one's career, and progress through a career ladder that often uses publication and funding metrics as benchmarks of research performance. This ecosystem tends to incentivize exciting studies that generate hypotheses or incompletely explore hypotheses over carefully designed studies with high research quality and transparency that more deeply explore a scientific question.

Publishing norms

Publications have served a vital role in scientific dissemination for hundreds of years (de Solla Price, 1963; Fyfe et al., 2015) and are commonly used by institutions and funders to appraise scientists for career progression and financial support (Mckiernan et al., 2019). Thus, to supply scientists with venues to share their work, both the number of scientific periodicals and the number of articles within those periodicals have increased dramatically over time (de Solla Price, 1975; Bornmann et al., 2021).

Counterintuitively, the rapid expansion of scientific publishing in the last 300 years, including its expansion online, has not necessarily ensured the transparency of reporting important experimental design elements (Menke et al., 2020, 2022). For example, a systematic review of in vivo research studies related to neurologic disorders found that most studies did not report how sample sizes were determined or what measures, such as blinding/masking and randomization, were taken to reduce the risk of unconscious biases (Sena et al., 2014). Similar shortcomings were observed in an evaluation of preclinical cardiovascular studies (Ramirez et al., 2017) and in an evaluation of papers published by Nature Publishing Group (NPQIP Collaborative group, 2019). Although many scientists, publishers, and institutions place a high value on transparent reporting in manuscripts (Landis et al., 2012; Nature Neuroscience, 2013; McNutt, 2014b; Marcus and whole Cell team, 2016), the widespread nature of this lack of transparency suggests that the culture of not reporting these items regularly will take time and effort to change. Without a detailed account of the experimental methods used in each study, it is not possible to assess the quality of the research or reliably interpret the results. Insufficiently rigorous or transparent preclinical work can hamper translation to clinical trials, and incomplete evidence regarding experimental treatments can skew interpretation of effectiveness and harms (Turner et al., 2008; Perrin, 2014). This recognition that transparency is vital was expressed centuries ago by William Gilbert, who wrote “How very easy it is to make mistakes and errors [in interpretation] in the absence of trustworthy experiments” (Gilbert, 1600). How can research be assessed, or even built on, if one does not have access to the full experimental design details?

Researcher assessment

The prevalence of inadequate experimental transparency in the scientific literature raises additional questions about assessment of researchers themselves. Often, the quality and transparency of “investigation methods that the author has used in their research” are not the focus of assessment criteria. Rather, key metrics commonly used by institutions and funders to appraise neuroscientists for hiring, tenure, promotion, and funding include (1) their history of grant funding, (2) the quantity of their publications, (3) the journals in which they publish, (4) the number of times these publications have been cited, and (5) how experts in the field view their work (Vale, 2012; Collins and Tabak, 2014; Begley et al., 2015; Nosek et al., 2015; Moher et al., 2018; Mckiernan et al., 2019). Although expert reviewers should pay particular attention to the rigor and transparency of the science performed, outsized attention is often awarded to the bibliometrics above because of their simplicity. The measurement of journal citation rates was first proposed by P.L.K. Gross and E.M. Gross as a way to assist university libraries in prioritizing highly cited periodicals when making decisions about periodical subscriptions (Gross and Gross, 1927), a practice that was simplified by Eugene Garfield’s popularization of the Journal Impact Factor (JIF), a normalized number of recent citations that a journal has received (Garfield and Sher, 1963; Garfield, 1972). The JIF, originally meant to help libraries identify collections of particular interest to the scientific community, was rapidly adopted as a surrogate measure for the popularity and prestige of a periodical (Schmid, 2017; Larivière and Sugimoto, 2019; Mckiernan et al., 2019). Often conflated with research quality (Mckiernan et al., 2019), such journal prestige does not provide specific information about the rigor of individual experiments or about an author’s transparency of reporting. Nevertheless, as early as 1976, the JIF was suggested as a tool to “establish publication records for an individual” (Narin, 1976), although the JIF is an average number applied to a highly skewed distribution of publications (Larivière and Sugimoto, 2019). Attributing its value to individual manuscripts is illogical and was warned against by many, including Garfield himself (Garfield, 1963; Seglen, 1997).
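To make the skewness point concrete, here is a minimal sketch (in Python, using hypothetical citation counts rather than data from any real journal) of why a journal-level mean, in the spirit of JIF-style metrics, says little about a typical individual article:

```python
# Minimal, hypothetical sketch: a journal-level mean (as in JIF-style metrics)
# versus the median of the same skewed citation distribution.
from statistics import mean, median

# Hypothetical citation counts for the articles a journal published in the two
# preceding years (most cited rarely, a few cited heavily -- the typical skew).
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 40, 120]

journal_mean = mean(citations)     # what an impact-factor-style average reports
typical_paper = median(citations)  # closer to what most individual articles receive

print(f"Mean citations per article:    {journal_mean:.1f}")
print(f"Median citations per article:  {typical_paper:.1f}")
print(f"Articles cited below the mean: {sum(c < journal_mean for c in citations)} of {len(citations)}")
```

With these illustrative numbers, two heavily cited articles pull the mean above all but themselves, which is exactly why applying a journal-level average to any single manuscript is misleading.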

These bibliometric norms, which were formed long ago, likely spurred, albeit inadvertently, incentives that are at odds with fully transparent reporting of rigorously conducted research (Collins and Tabak, 2014). As Goodhart’s Law states, “when a measure becomes a target, it ceases to be a good measure” (Hoskin, 1996). If scientists are routinely assessed based on publication metrics, then they would naturally feel pressure to publish frequently in journals with a high JIF. Indeed, scientists have perceived a pressure to publish under this system since at least 1928, when Clarence Case wrote, “The system of promotion used in our universities amounts to the warning, ‘Publish or perish!’” (Case, 1928). It has long been recognized that publishing in these journals provides visibility and prestige to investigators (Rowlands and Nicholas, 2005), and concentrating submissions on a small number of publication venues inevitably increases rejection rates, making such periodicals even more desirable; the more difficult it is to be selected for publication, the more impressive the publication is perceived to be (Reich, 2013). This pressure continues today, as career stability and grant funding, both of which are essential for academic scientists to remain in science, rely heavily on evidence of continuous productivity through publication (Rice et al., 2020).

Simultaneously, these norms around JIF have historically disincentivized journals from publishing articles that are predicted to receive few citations. Commercial pressures require publishers to compete for visibility in the scientific community as well as subscriptions from institutions and individuals, a pressure that has become even more acute with the rapid expansion of periodicals over time (Falagas and Alexiou, 2008; McGuigan and Russell, 2008). To keep JIF high, journals have often prioritized articles with broad interest in the form of novelty and potential scientific impact (Brumback, 2008; Falagas and Alexiou, 2008). Unexpected and exciting results, however, are often preliminary and need additional, high-quality validation. This pressure to exhibit novelty could promote incomplete studies (such as studies that, despite containing a high volume of experiments and data, do not fully investigate approaches that could refute the hypothesis or do not fully report data inconsistent with the hypothesis), as scientists fear being “scooped” and losing precedence for publishing a particular finding (Reif, 1961). In the words of Collins and Tabak, “Perhaps the most vexed issue is the academic incentive system. It currently over-emphasizes publishing in high-profile journals…[which] encourages rapid submission of research findings to the detriment of careful replication” (Collins and Tabak, 2014). The focus on volume may inadvertently incentivize the building of piles of bricks rather than solid edifices, as Bernard Forscher’s 1963 allegory “Chaos in the Brickyard” posits (Forscher, 1963). If carefully explored scientific questions that lead to “unexciting” and especially null findings are not prioritized equally by high-profile journals, some scientists may conclude that the most logical course is to place such results into a “file drawer” indefinitely because of their perceived low value or the difficulty of getting them published (Rosenthal, 1979). This practice, which results in publication bias, distorts the known body of scientific knowledge and is widespread (Kyzas et al., 2007; Turner et al., 2008; Sena et al., 2010). This environment could also lead to slower career progression and lower success in obtaining funding for some very careful and rigorous scientists.

Despite the value placed on JIF, journal and individual publication citation counts have been shown either not to correlate or to correlate negatively with several dimensions of research quality (Macleod et al., 2015; Dougherty and Horne, 2022). Therefore, many suggest that researcher assessment by institutions and funders should be refocused away from JIF and other bibliometrics (Hicks et al., 2015; Schmid, 2017; Mckiernan et al., 2019; Moher et al., 2020), including Collins and Tabak: “University promotion and tenure committees must resist the temptation to use arbitrary surrogates, such as the number of publications in journals with high impact factors, when evaluating an investigator’s scientific contributions and future potential” (Collins and Tabak, 2014). The same could be said for funding review committees. Assessing research quality may very well require more time and resources to implement, but it would better identify and reward scientists who employ the most rigorous and transparent practices.

Peer review

Peer review could be regarded as one effective locus for screening publications and grant proposals for research quality. However, this relatively young practice (de Solla Price, 1975; Spier, 2002) depends on scientific peers who are subject to the same messaging that promotes publication productivity and bibliometrics as major foci for assessment. While transparency in reporting one’s methods of experimental design and analysis is vital for enabling peer review of research quality in grant applications and publications, assessing that quality can be challenging. As the number of manuscripts and grant submissions continues to rise, reviewers (who volunteer their time and expertise) are likely asked to evaluate increasing numbers of documents and thus, for the sake of time, may not be able to pay as close attention to the “investigation methods that the author has used in their research” as they would like. The scientific complexity within individual publications has also increased strikingly in parallel with their volume (Cordero et al., 2016), possibly because of the pressures described earlier. To illustrate this, we found that the first 18 papers published in Cell (January–March 1974) had an average of 3.2 authors, 4.5 figures, and 7.7 panels per paper. In contrast, in the first issue of Cell in 2023, also comprising 18 papers, these numbers were 19.9, 13.4, and 102.1, respectively. This increased length and complexity of individual publications further adds (1) to the time needed to review a single manuscript and (2), in many cases, to the range of expertise required to evaluate all aspects of a manuscript, which individual reviewers may not have.
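The per-paper averages quoted above are simple arithmetic; a minimal sketch of the tally, using hypothetical placeholder counts rather than the actual Cell data, might look like this:

```python
# Hypothetical sketch: mean authors, figures, and panels per paper for one set of
# issues, given hand-collected counts (placeholder values, not the actual Cell data).
from statistics import mean

papers = [
    {"authors": 2, "figures": 4, "panels": 6},
    {"authors": 3, "figures": 5, "panels": 9},
    {"authors": 4, "figures": 4, "panels": 8},
]

averages = {key: mean(p[key] for p in papers) for key in ("authors", "figures", "panels")}
print(averages)  # with these placeholders: authors 3, figures ~4.3, panels ~7.7
```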

The expansion of experimental techniques and the growth of team science are exciting developments in the advancement of biomedical research, but they also bring additional responsibilities. Claude Bernard foresaw this contemporary issue as early as 1865: “The more complex the science, the more essential is it, in fact, to establish a good experimental standard, so as to secure comparable facts, free from sources of error” (Bernard, 1865). Although peer review is critically important for scientific discourse, the increased time commitment and challenge it poses for reviewers, together with the competing priorities signaled by institutions and funders, mean that peer review alone cannot be expected to solve issues of research quality in scientific proposals and publications. Changes to peer review that enhance attention to rigor and transparency must be part of a broader change in the culture of science.

Scientific culture

Given the forces that have been at play for many generations of scientists, shifting the focus to more explicitly emphasize research quality through rigor and transparency in scientific communications will require a multipronged approach. Institutions, publishers, funders, professional organizations, and researchers, as contributors to the biomedical landscape, will all need to play their part in this evolution (Fig. 1). Indeed, in recent years there have been several efforts to change the culture of science from many sectors of the research ecosystem, such as proposed changes to funding structures (Liu et al., 2020), publication practices (Chambers, 2013; Berg et al., 2016; Schmid, 2017; du Sert et al., 2020), institutional incentive structures (Dirnagl et al., 2016; Strech et al., 2020), and researcher education (Bosch and Casadevall, 2017; du Sert et al., 2020; Bespalov et al., 2021). To bolster and sustain such efforts, every member of the scientific ecosystem has a role to play in enhancing research rigor and transparency where possible.

NINDS Efforts

As a funding entity, NINDS has acknowledged its role in the scientific ecosystem and actively promoted better research practices through efforts to help shift the scientific culture (Fig. 2). As a complement to efforts outside of NINDS, these solutions are designed to disrupt long-ingrained practices and spur action toward improving scientific rigor and transparency to ensure a better future. Many sectors of the scientific ecosystem are targeted by these programs to engage the entire community and catalyze change.

[Figure 2 timeline of NIH and NINDS efforts from 2011 to 2023 to enhance rigor and transparency in the scientific ecosystem:
2011: NINDS notice on enhanced attention to rigor in grants (NOT-NS-11-023); NINDS enhanced review of clinical trials (scientific premise).
2012: NINDS preclinical reporting workshop; publication of the “Landis 4” reporting guidelines (PMID: 23060188).
2013: presentation by the NINDS Director to the NIH ACD on a rigor strategic vision (go.nih.gov/87xP6FA).
2014: NIH publication on rigor plans (PMID: 24482835); NINDS participation in NIGMS rigor training modules (RFA-GM-15-006); NIH workshop with publishers (go.nih.gov/Og7ryHa).
2015: enhanced training requirements in the Jointly Sponsored Neuroscience T32 (experimental design, rigorous research, statistics, quantitative approaches); addition of scientific contributions to the NIH biosketch (NOT-OD-15-032).
2016: enhanced rigor focus in NIH application instructions (NOT-OD-16-011); appointment of the NINDS Director of Research Quality.
2017: publication on rigor in conference presentations (PMID: 28796229); publication on rigor in NIH grant instructions (PMID: 28575442).
2018: NINDS workshop on a visionary educational resource for research rigor (go.nih.gov/H8nMD91).
2019: updated instructions for rigor in NIH applications (NOT-OD-18-228).
2020: publication from the resource workshop (PMID: 32127131); launch of the NINDS Rigor Champions webpage (go.nih.gov/s2EdAZ1); enhanced NIH F, T, and K training requirements (rigorous research design, quantitative approaches, data analysis and interpretation).
2021: launch of the NINDS resource initiative for rigor education (go.nih.gov/QM2qBxn).
2022: NINDS Rigor Champions workshop (go.nih.gov/AhYB7Do); launch of NINDS ORQ social media (@NINDS4Rigor).
2023: launch of the NINDS intra-institutional rigor culture initiative (STIRR; RFA-NS-24-020); launch of the NINDS Rigor Champions Prize (NOT-NS-23-085); NINDS professional development workshop at the Society for Neuroscience meeting (Championing Rigor).]
Figure 2.

Key milestones in National Institute of Neurological Disorders and Stroke (NINDS) and National Institutes of Health (NIH) efforts to catalyze culture change. Timeline with examples of NINDS and trans-NIH efforts since 2011 to improve the awareness and practice of rigorous and transparent research practices. Corresponding to elements of the scientific ecosystem shown in Figure 1, magenta boxes relate to funding efforts, dark green boxes to reporting in publications, black boxes to training and professional development, orange boxes to efforts of individual rigor champions, and the blue box to institutional culture change. ACD, Advisory Committee to the NIH Director. NIGMS, National Institute of General Medical Sciences. T32, Ruth L. Kirschstein National Research Service Award Institutional Training Program. ORQ, Office of Research Quality.

Funding

Funding is a strong incentive for shaping researcher behavior. NINDS took steps to bolster research quality and transparency of “investigation methods that the author has used in their research” within grant applications by publicly declaring enhanced attention to rigor and transparency in 2011 (https://go.nih.gov/HEQ6Jem) and, around the same time, modifying NINDS-run reviews of clinical trials to pay explicit attention to the research quality of foundational work justifying the trial. This preceded NIH’s modifications to grant application instructions that increased applicant and reviewer focus on the rigor of the prior research (formerly “scientific premise”), rigor of the proposed research, biological variables, and authentication of chemical and biological reagents (https://go.nih.gov/UWck7hx, updated in https://go.nih.gov/8BwyOex). NINDS also created a dedicated Office of Research Quality (ORQ; https://go.nih.gov/kxZASSG), which is unique among the Institutes, Centers, and Offices of NIH, to promote rigorous research practices and transparent reporting within the scientific community. In addition, NINDS bolstered rigor-related language in its funding opportunities (including the review criteria) and strengthened consideration of the investigators’ track record of rigorous research when deciding whether to fund an application beyond the payline (https://go.nih.gov/tffo0yt). NINDS, along with the NIH, continues to transform approaches to funding and peer review with the goal of better identifying and promoting high-quality research.

Publications

Given that publication practices heavily influence transparency and dissemination of research, NINDS convened a workshop in 2012 with journal editors, funders, peer reviewers, and investigators to identify key practices in preclinical animal studies that should be reported. This led to the recommendation that “at a minimum, studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blind to the treatment, and the handling of data” (Landis et al., 2012). Some publishers quickly adopted these guidelines (Nature, 2013; Kelner, 2013), and these publishers and NIH subsequently held a workshop with additional editors and publishers that resulted in reporting guidelines that were widely endorsed (McNutt, 2014a). Although transparency in reporting important experimental design practices appears to have increased (NPQIP Collaborative group, 2019), transparency of such items in biomedical research overall could still use improvement (Menke et al., 2020, 2022).

Reporting guidelines, however, are just one way to encourage transparency of “investigation methods.” For example, there are some peer-reviewed journals that focus on rigor and transparency of methods rather than the outcome of experiments (Chambers and Tzavella, 2022), and NINDS is exploring additional ways to address publication bias. There has also been a recent push in the scientific community for more open science beyond traditional publications (Laakso et al., 2011), which includes such efforts as NIH encouraging dissemination of early work through preprints (https://go.nih.gov/b6CETx4) and widely sharing research data regardless of relationship to a publication (https://go.nih.gov/hgKCjqt). Open science, however, does not necessarily guarantee increased reporting of important experimental design elements, nor does it guarantee thorough review by peers. For this reason, it is vital for open science and data sharing efforts to specifically consider reporting of important metadata, such as experimental rigor and transparency practices, to help emphasize research quality alongside access to the resulting data.

Education and professional development

High levels of rigor and transparency in scientific communications can only be achieved if members of the community, including early career researchers, are adequately educated about the research quality issues inherent in their experiments as well as the best ways to mitigate them. If researchers do not know there are issues, or know there are issues but not how to fix them, we cannot expect those issues to be resolved. For this reason, efforts by NINDS to improve research quality have also emphasized the importance of and the need for training and education in the principles of research rigor (Landis et al., 2012; Koroshetz et al., 2020). For example, NINDS drove changes to the Jointly Sponsored Predoctoral Training Program in the Neurosciences (T32), which requires supported students and scholars to obtain a thorough understanding of experimental design, including the principles of experimental rigor (https://go.nih.gov/duZNsQI). Skill-building in such rigorous research design and analysis practices became an NIH-wide requirement for training grants in 2020 (https://go.nih.gov/E7qLJhT).

Despite these training requirements, NINDS noted a dearth of high-quality educational materials and programs devoted to research rigor and transparency among NINDS T32 training grant institutions in 2018 (https://go.nih.gov/6XuP424) and, therefore, held a workshop on how to improve education in the principles of rigorous research (Koroshetz et al., 2020). Following this workshop, NINDS launched an initiative to build an innovative online platform that will host educational units on fundamental principles of research rigor that can be integrated easily into training programs (https://go.nih.gov/L1D5u7N). This initiative is being driven by the scientific community, for the scientific community (https://c4r.io/), and there is one final receipt date remaining to apply for a grant to create additional educational units (https://go.nih.gov/j967x5r). This effort is still early in development, so we encourage the neuroscience community to follow its progress (for example, through ORQ’s social media, see https://twitter.com/NINDS4Rigor) and participate in future meetings or testing of the materials.

Science societies and other professional organizations are also an important locus for professional development. To this end, NINDS often partners with these groups to enhance efforts to educate and provide resources for the scientific community. For example, a 2017 NINDS-hosted roundtable with conference organizers resulted in multiple suggestions for enhancing transparency at conferences, including the addition of small icons to talks and posters to signal various rigor and transparency practices with little added researcher effort (Silberberg et al., 2017). ORQ is continuing to pilot the use of such icons at additional NINDS and non-NINDS meetings to gauge interest and effectiveness. Recently, the Society for Neuroscience, which has developed educational materials on rigorous research practices through an NINDS-funded grant (https://neuronline.sfn.org/collection/foundations-of-rigorous-neuroscience-research), has added requirements for annual meeting presenters to provide information on the rigor of their studies (https://www.sfn.org/meetings/meeting-policies-and-guidelines/presenter-guidelines-and-policies-for-sfn-events). This could provide an opportunity for attendees to try out such icons. In addition, NINDS will be organizing a professional development workshop at the upcoming 2023 meeting of the Society for Neuroscience on how to better champion rigorous research practices, which will provide practical examples and advice to individual researchers about how to catalyze change within their own institutions. Education and support from professional organizations can be important catalysts for empowering scientists to shift behavior, initiate new and better practices, and spread the lessons they have learned to others.

Individual scientists and rigor champions

We encourage motivated individuals who would like to change the culture of science by elevating the importance of careful research to act as “rigor champions.” Such individuals with drive and vision can spark broader change than they might realize. They cannot, however, transform the scientific ecosystem alone. Communities in which champions can connect with each other are needed for culture change efforts to thrive and be sustained (Orben, 2019; Stewart et al., 2022). Thus, NINDS has created avenues to catalyze these communities to connect and share experiences (https://go.nih.gov/5ZQX6Yv). For example, NINDS hosted a workshop in 2022 to better define a path for supporting rigor champions (https://go.nih.gov/AhYB7Do), out of which came the clear message that rigor and transparency efforts should be better supported and recognized. To this end, NINDS recently launched the NINDS Rigor Champions Prize (https://www.challenge.gov/?challenge=ninds-rigor-champions-prize), a federal Challenge mechanism aiming to recognize individuals and small teams who have worked to enhance the culture around scientific rigor and transparency. We especially encourage participation by early career researchers, institutional staff, and other members of the scientific community who do not traditionally apply for NIH grants or receive recognition for their efforts through traditional hiring and promotion criteria. The first round of winners will be announced in November 2023. Through these and similar efforts, we hope to better identify and recognize worthwhile activities already occurring in the community.

Institutional incentives

For the efforts described above to be successful, career incentives for scientists must align with valuing rigor, transparency, education, and championship of better research practices. Culture at the level of a department or institution can reinforce behaviors and attitudes that enhance or diminish adherence to such practices. Thus, to support positive culture change, NINDS recently released a funding opportunity for United States-based institutional entities that perform or support neuroscience research to create innovative programs, strategies, and approaches that incentivize research quality (https://go.nih.gov/BbnZR29). This incentivization can be achieved through infrastructure, education, adjustments in researcher assessment and recognition, policy change, or a variety of other approaches. We encourage applications that propose programs of various sizes or foci, so a spectrum of programs is expected. Successful, sustainable programs have the potential to spread to additional institutions, which could catalyze wider change and enhance research quality across the scientific community.

Looking Forward

Culture change to elevate the rigor and transparency of “investigation methods that the author has used in their research,” per Ramón y Cajal’s advice, can be achieved only with the help of every contributor to the scientific milieu, including scientists, educators, professional organizations, publishers, funders, and institutions (Koroshetz et al., 2020). The incentives and pressures that exist in today’s research environment, which have been deeply rooted for a long time, inevitably affect how scientists interact with the other components in the system. Thus, all must work in concert toward the common goal of improving rigor and transparency for that change to be successful. Many groups and individuals have worked to change this culture, and NINDS as a funding entity has a responsibility to contribute positively to a future where the neuroscience community and the biomedical community at large embrace a collective push to solve “problems that are still pending a solution” with the highest quality of work possible. Over time, NINDS’s efforts have built on each other, with new opportunities recently announced to further this collective goal, and additional avenues to address research rigor and transparency issues and to support the scientific community will likely be needed. To emphasize its commitment, NINDS included rigor and transparency as a “cross-cutting strategy” in its 2021–2026 Strategic Plan (https://go.nih.gov/4HyXnv5), promising to “promote scientific rigor and transparency throughout all NINDS programs and policies.” NINDS’s efforts complement those of other entities trying to shift attitudes and practices (such as the working groups of the Advisory Committee to the NIH Director; see https://go.nih.gov/uhdqIR4), and together these activities will be more effective at initiating and sustaining change. Rigor champions at all levels, fields, and positions can contribute at least a small piece to the larger puzzle, and every piece has value. The collective partnership of everyone in the scientific community is fundamental for a future that values transparent and rigorous research so that knowledge and clinical practice are built on the strongest foundation.

Footnotes

  • The authors declare no competing financial interests.

  • The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the United States Government.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

1. Begley CG, Buchan AM, Dirnagl U (2015) Robust research: institutions must do their part for reproducibility. Nature 525:25–27. https://doi.org/10.1038/525025a pmid:26333454
2. Berg JM, et al. (2016) Preprints for the life sciences. Science 352:899–901. https://doi.org/10.1126/science.aaf9133 pmid:27199406
3. Bernard C (1865) An introduction to the study of experimental medicine. New York: Henry Schuman, Inc.
4. Bespalov A, et al. (2021) Introduction to the EQIPD quality system. Elife 10:e63294. https://doi.org/10.7554/eLife.63294
5. Bornmann L, Haunschild R, Mutz R (2021) Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanit Soc Sci Commun 8:224.
6. Bosch G, Casadevall A (2017) Graduate biomedical science education needs a new philosophy. mBio 8:e01539-17. https://doi.org/10.1128/mBio.01539-17
7. Brumback RA (2008) Worshiping false idols: the impact factor dilemma. J Child Neurol 23:365–367. https://doi.org/10.1177/0883073808315170 pmid:18401031
8. Case CM (1928) Scholarship in sociology. Sociol Soc Res 12:323–340.
9. Chambers CD (2013) Registered reports: a new publishing initiative at Cortex. Cortex 49:609–610. https://doi.org/10.1016/j.cortex.2012.12.016 pmid:23347556
10. Chambers CD, Tzavella L (2022) The past, present and future of registered reports. Nat Hum Behav 6:29–42. https://doi.org/10.1038/s41562-021-01193-7 pmid:34782730
11. Collins FS, Tabak LA (2014) Policy: NIH plans to enhance reproducibility. Nature 505:612–613. https://doi.org/10.1038/505612a pmid:24482835
12. Cordero RJB, de León-Rodriguez CM, Alvarado-Torres JK, Rodriguez AR, Casadevall A (2016) Life science’s average publishable unit (APU) has increased over the past two decades. PLoS One 11:e0156983. https://doi.org/10.1371/journal.pone.0156983 pmid:27310929
13. de Solla Price DJ (1963) Little science, big science. New York: Columbia University Press.
14. de Solla Price DJ (1975) Science since Babylon. New Haven and London: Yale University Press.
15. Dirnagl U, Przesdzing I, Kurreck C, Major S (2016) A laboratory critical incident and error reporting system for experimental biomedicine. PLoS Biol 14:e2000705. https://doi.org/10.1371/journal.pbio.2000705 pmid:27906976
16. Dougherty MR, Horne Z (2022) Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. R Soc Open Sci 9:220334. https://doi.org/10.1098/rsos.220334 pmid:35991336
17. du Sert NP, et al. (2020) The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. PLoS Biol 18:e3000410.
18. Falagas ME, Alexiou VG (2008) The top-ten in journal impact factor manipulation. Arch Immunol Ther Exp (Warsz) 56:223–226. https://doi.org/10.1007/s00005-008-0024-5 pmid:18661263
19. Forscher BK (1963) Chaos in the brickyard. Science 142:339. https://doi.org/10.1126/science.142.3590.339 pmid:17799464
20. Fyfe A, McDougall-Waters J, Moxham N (2015) 350 years of scientific periodicals. Notes Rec R Soc Lond 69:227–239. https://doi.org/10.1098/rsnr.2015.0036 pmid:26495575
21. Garfield E (1963) Citation indexes in sociological and historical research. Am Doc 14:289–291. https://doi.org/10.1002/asi.5090140405
22. Garfield E (1972) Citation analysis as a tool in journal evaluation. Science 178:471–479. https://doi.org/10.1126/science.178.4060.471 pmid:5079701
23. Garfield E, Sher IH (1963) New factors in the evaluation of scientific literature through citation indexing. Am Doc 14:195–201. https://doi.org/10.1002/asi.5090140304
24. Gilbert W (1600) On the loadstone and magnetic bodies and on the great magnet the earth (translated by Mottelay PF). London: Bernard Quaritch.
25. Gross PL, Gross EM (1927) College libraries and chemical education. Science 66:385–389. https://doi.org/10.1126/science.66.1713.385 pmid:17782476
26. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I (2015) The Leiden Manifesto for research metrics. Nature 520:429–431. https://doi.org/10.1038/520429a pmid:25903611
27. Hoskin K (1996) The ‘awful idea of accountability’: inscribing people into the measurement of objects. In: Accountability: power, ethos and the technologies of managing (Munro R, Mouritsen J, eds), pp 265–282. London: International Thomson Business Press.
28. Kelner KL (2013) Playing our part. Sci Transl Med 5:190ed197. https://doi.org/10.1126/scitranslmed.3006661
29. Koroshetz WJ, et al. (2020) Framework for advancing rigorous research. Elife 9:e55915. https://doi.org/10.7554/eLife.55915
30. Kyzas PA, Denaxa-Kyza D, Ioannidis JPA (2007) Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer 43:2559–2579. https://doi.org/10.1016/j.ejca.2007.08.030 pmid:17981458
31. Laakso M, Welling P, Bukvova H, Nyman L, Björk BC, Hedlund T (2011) The development of open access journal publishing from 1993 to 2009. PLoS One 6:e20961. https://doi.org/10.1371/journal.pone.0020961 pmid:21695139
32. Landis SC, et al. (2012) A call for transparent reporting to optimize the predictive value of preclinical research. Nature 490:187–191. https://doi.org/10.1038/nature11556 pmid:23060188
33. Larivière V, Sugimoto CR (2019) The journal impact factor: a brief history, critique, and discussion of adverse effects. In: Springer handbook of science and technology indicators (Glänzel W, Moed HF, Schmoch U, Thelwall M, eds), pp 3–24. Cham: Springer International Publishing.
34. Liu M, Choy V, Clarke P, Barnett A, Blakely T, Pomeroy L (2020) The acceptability of using a lottery to allocate research funding: a survey of applicants. Res Integr Peer Rev 5:3. https://doi.org/10.1186/s41073-019-0089-z pmid:32025338
35. Macleod MR, McLean AL, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, Hirst T, Hemblade R, Bahor Z, Nunes-Fonseca C, Potluru A, Thomson A, Baginskaite J, Egan K, Vesterinen H, Currie GL, Churilov L, Howells DW, Sena ES (2015) Risk of bias in reports of in vivo research: a focus for improvement. PLoS Biol 13:e1002273. https://doi.org/10.1371/journal.pbio.1002273 pmid:26460723
36. Marcus E; whole Cell team (2016) A STAR is born. Cell 166:1059–1060. https://doi.org/10.1016/j.cell.2016.08.021 pmid:27565332
37. McGuigan GS, Russell RD (2008) The business of academic publishing: a strategic analysis of the academic journal publishing industry and its impact on the future of scholarly publishing. E-JASL 9(3).
38. Mckiernan EC, Schimanski LA, Nieves CM, Matthias L, Niles MT, Alperin JP (2019) Use of the journal impact factor in academic review, promotion, and tenure evaluations. Elife 8:e47338. https://doi.org/10.7554/eLife.47338
39. McNutt M (2014a) Journals unite for reproducibility. Science 346:679. https://doi.org/10.1126/science.aaa1724 pmid:25383411
40. McNutt M (2014b) Reproducibility. Science 343:229. https://doi.org/10.1126/science.1250475 pmid:24436391
41. Menke J, Roelandse M, Ozyurt B, Martone M, Bandrowski A (2020) The rigor and transparency index quality metric for assessing biological and medical science methods. iScience 23:101698. https://doi.org/10.1016/j.isci.2020.101698 pmid:33196023
42. Menke J, Eckmann P, Ozyurt IB, Roelandse M, Anderson N, Grethe J, Gamst A, Bandrowski A (2022) Establishing institutional scores with the rigor and transparency index: large-scale analysis of scientific reporting quality. J Med Internet Res 24:e37324. https://doi.org/10.2196/37324 pmid:35759334
43. Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN (2018) Assessing scientists for hiring, promotion, and tenure. PLoS Biol 16:e2004089. https://doi.org/10.1371/journal.pbio.2004089 pmid:29596415
44. Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, Coriat AM, Foeger N, Dirnagl U (2020) The Hong Kong principles for assessing researchers: fostering research integrity. PLoS Biol 18:e3000737. https://doi.org/10.1371/journal.pbio.3000737 pmid:32673304
45. Narin F (1976) Evaluative bibliometrics: the use of publication and citation analysis in the evaluation of scientific activity. Cherry Hill: Computer Horizons, Inc.
46. Nature (2013) Announcement: reducing our irreproducibility. Nature 496:398.
47. Nature Neuroscience (2013) Raising standards. Nat Neurosci 16:517.
48. Nosek BA, et al. (2015) Scientific standards. Promoting an open research culture. Science 348:1422–1425. https://doi.org/10.1126/science.aab2374 pmid:26113702
49. NPQIP Collaborative group (2019) Did a change in Nature journals’ editorial policy for life sciences research improve reporting? BMJ Open Sci 3:e000035.
50. Orben A (2019) A journal club to fix science. Nature 573:465. https://doi.org/10.1038/d41586-019-02842-8 pmid:31551562
51. Perrin S (2014) Preclinical research: make mouse studies work. Nature 507:423–425. https://doi.org/10.1038/507423a pmid:24678540
52. Ramirez FD, Motazedian P, Jung RG, Di Santo P, MacDonald ZD, Moreland R, Simard T, Clancy AA, Russo JJ, Welch VA, Wells GA, Hibbert B (2017) Methodological rigor in preclinical cardiovascular studies: targets to enhance reproducibility and promote research translation. Circ Res 120:1916–1926. https://doi.org/10.1161/CIRCRESAHA.117.310628 pmid:28373349
53. Ramón y Cajal S (1897) Fundamentos racionales y condiciones técnicas de la investigación biológica. Madrid: Impr. de L. Aguado.
54. Reich ES (2013) Science publishing: the golden club. Nature 502:291–293. https://doi.org/10.1038/502291a pmid:24132273
55. Reif F (1961) The competitive world of the pure scientist: the quest for prestige can cause conflict between the goals of science and the goals of the scientist. Science 134:1957–1962. https://doi.org/10.1126/science.134.3494.1957 pmid:17744407
56. Rice DB, Raffoul H, Ioannidis JPA, Moher D (2020) Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities. BMJ 369:m2081. https://doi.org/10.1136/bmj.m2081 pmid:32586791
57. Rosenthal R (1979) The file drawer problem and tolerance for null results. Psychol Bull 86:638–641. https://doi.org/10.1037/0033-2909.86.3.638
58. Rowlands I, Nicholas D (2005) Scholarly communication in the digital environment: the 2005 survey of journal author behaviour and attitudes. Aslib Proceedings 57:481–497.
59. Schmid SL (2017) Five years post-DORA: promoting best practices for research assessment. Mol Biol Cell 28:2941–2944. https://doi.org/10.1091/mbc.E17-08-0534 pmid:29084913
60. Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. BMJ 314:498–502. https://doi.org/10.1136/bmj.314.7079.497 pmid:9056804
61. Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR (2010) Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 8:e1000344. https://doi.org/10.1371/journal.pbio.1000344 pmid:20361022
62. Sena ES, Currie GL, McCann SK, Macleod MR, Howells DW (2014) Systematic reviews and meta-analysis of preclinical studies: why perform them and how to appraise them critically. J Cereb Blood Flow Metab 34:737–742. https://doi.org/10.1038/jcbfm.2014.28 pmid:24549183
63. Silberberg SD, Crawford DC, Finkelstein R, Koroshetz WJ, Blank RD, Freeze HH, Garrison HH, Seger YR (2017) Shake up conferences. Nature 548:153–154. https://doi.org/10.1038/548153a pmid:28796229
64. Spier R (2002) The history of the peer-review process. Trends Biotechnol 20:357–358. https://doi.org/10.1016/s0167-7799(02)01985-6 pmid:12127284
65. Stewart SLK, Pennington CR, da Silva GR, Ballou N, Butler J, Dienes Z, Jay C, Rossit S, Samara A; UK Reproducibility Network (UKRN) Local Network Leads (2022) Reforms to improve reproducibility and quality must be coordinated across the research ecosystem: the view from the UKRN Local Network Leads. BMC Res Notes 15:58. https://doi.org/10.1186/s13104-022-05949-w pmid:35168675
66. Strech D, Weissgerber T, Dirnagl U; QUEST Group (2020) Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative. PLoS Biol 18:e3000576. https://doi.org/10.1371/journal.pbio.3000576 pmid:32045410
67. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358:252–260. https://doi.org/10.1056/NEJMsa065779 pmid:18199864
68. Vale RD (2012) Evaluating how we evaluate. Mol Biol Cell 23:3285–3289. https://doi.org/10.1091/mbc.E12-06-0490 pmid:22936699

Synthesis

Reviewing Editor: Christophe Bernard, INSERM & Institut de Neurosciences des Systèmes

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: NONE.

Please correct some references

I prefer the short legends

And it is always good to see Claude Bernard cited!
