
Research Article | New Research, Cognition and Behavior

Cognitive Flexibility Improves Memory for Delayed Intentions

Seth R. Koslov, Arjun Mukerji, Katlyn R. Hedgpeth and Jarrod A. Lewis-Peacock
eNeuro 10 October 2019, 6 (6) ENEURO.0250-19.2019; https://doi.org/10.1523/ENEURO.0250-19.2019
Seth R. Koslov: 1Department of Psychology
Arjun Mukerji: 3Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
Katlyn R. Hedgpeth: 1Department of Psychology
Jarrod A. Lewis-Peacock: 1Department of Psychology; 2Institute for Neuroscience, University of Texas, Austin, Texas 78712

Figures

Figure 1.

    Task design. Left, Ongoing task difficulty could increase or decrease every 2 s across a trial or remain fixed at the middle difficulty level (level 8 of 15). We validated the relationship between difficulty levels in a pilot study, finding that as difficulty increased, reaction time increased and accuracy decreased. For more information on pilot study 1, see Extended Data Figure 1-1. We further validated that the ongoing task could impact prospective memory strategy use in a second pilot study, where we found PM cost was significantly higher at an easy difficulty (level 4) than at a harder difficulty (level 12). For more information on pilot study 2, see Extended Data Figure 1-2. Right, In the dual-task PM experiment, participants identified the reappearance of a PM target while concurrently performing the ongoing task.

Figure 2.

    Behavioral performance. a, Ongoing task accuracy across difficulties, error ribbons ± 1 SEM. PM, Dual-task trials with a PM intention; Non-PM, ongoing task only without a PM intention. b, Ongoing task RT (correct responses only) across difficulties, error ribbons ± 1 SEM. c, PM cost (the difference between ongoing task RT for PM trials vs Non-PM trials) was computed for each participant at every difficulty level. Violin plots represent the distribution of by-participant average costs at each difficulty. PM cost is highest at easy difficulty levels (dark red) and decreases as task difficulty increases (dark blue). d, Polynomial model fits validated the use of linear models, which allowed us to calculate the shift in PM cost on each trial (for further analysis details, see text below, Extended Data Figure 2-1, Extended Data Table 2-3). Next, PM cost slopes were calculated as the change in PM cost within each trial. Violin plots show the average within-trial PM cost slopes for decreasing (Dec), fixed (Fix), and increasing (Inc) trials across participants. *p < 0.05. e, PM accuracy for each participant across trial types. f, Logistic regression bootstrap analysis linking PM cost slope to PM accuracy for decreasing (red) and increasing (blue) trials. Each individual red/blue line shows the predicted relationship for each bootstrapped sample (n = 10,000). White lines reflect the fixed-effects relationship for the original sample. *p < 0.05.

Figure 3.

fMRI decoding of PM intentions. a, Brain regions significantly engaged by the addition of the PM task to the OG task (GLM contrast PM > Non-PM, FDR corrected at p < 0.001; Extended Data Table 3-1, ROI list). aI, anterior insular cortex; dACC, dorsal anterior cingulate cortex; dlPFC, dorsolateral prefrontal cortex; IPS, intraparietal sulcus; LOC, lateral occipital cortex; mOCC, medial occipital cortex; rlPFC, rostrolateral prefrontal cortex; VTC, ventral temporal cortex. These regions were used as the initial feature mask to train and test fMRI pattern classifiers for PM intention-related activity. To more directly identify the regions primarily responsible for representing the PM intention during this task, we performed a surface-based searchlight analysis, which indicated that the VTC and LOC were the regions most important for PM processing (for more details, see Extended Data Figure 3-2). b, PM EV (the difference between classifier evidence for the category of the PM target and the nontarget category) was computed for each participant at every difficulty level, and group data are shown in violin plots. PM intention evidence was highest at easy difficulties (dark red) and lowest for the most difficult levels (dark blue). c, The relationship between PM intention evidence and PM accuracy was computed using bootstrapped logistic regression (n = 10,000 iterations) for decreasing (red) and increasing (blue) trials. *p < 0.05.

Figure 4.

Model comparisons using behavioral and neural metrics to predict PM performance. Model weights and R² values were computed across bootstrap iterations (n = 10,000) to test model differences. a, wAICs across bootstrap iterations for each model. b, Explanatory power of each model (R²) shown as distributions across bootstraps. Medians are indicated by dashed gray lines.

Extended Data

  • Figure 1-1

Average reaction time (y-axis left, blue) and accuracy (y-axis right, red) for pilot participants are plotted across each difficulty level of the ongoing visual search task. Reaction time increases and accuracy decreases from the easiest difficulty (1) to the hardest (14). The purpose of the first behavioral pilot study was to determine if the ongoing task could be parametrically modulated in a controlled manner. For this pilot study, participants (n = 15) performed the ongoing task in isolation. On each probe, participants indicated the absence or presence of the arrow target (rightward-facing horizontal arrow) in a newly generated visual-search array (every 2 s) with a button press (left: absent; right: present; response deadline: 1.9 s). Target arrow location was counterbalanced between the top and bottom half of the screen. Non-target arrows appeared in set positions around the circular array, oriented within a distribution of angles determined by the current task difficulty setting. Participants sat approximately 18 in. away from the screen, and all 10 arrows, which were 0.64° by 0.22° in size, were 3.18° away from the center of the screen. OG task difficulty was manipulated on each probe by adjusting two parameters controlling the orientation of the distractor arrows: their minimum similarity to the target and their similarity to other distractors. For distractor-to-target similarity, a minimum angular distance for distractors from the target (i.e., horizontal or 0°) was set to 5, 15, 25, 45, or 65 degrees. For distractor-to-distractor similarity, the maximum variance from the minimum angular distance was set to 10, 20, or 40 degrees. The factorial combination of these parameters (excluding any combination where minimum plus variance could exceed the 90° vertical plane) created 14 difficulty conditions. Participants performed three blocks of trials separated by short voluntary breaks for rest.
Each block contained 14 mini-blocks, each composed of 20 visual-search trials at one difficulty level. Difficulty level was pseudo-randomly selected, with the only constraint being that each of the 14 difficulty levels was selected exactly three times. As difficulty increased, accuracy decreased (F(13,182) = 74.89, p < .001) and reaction time increased (F(13,182) = 39.53, p < .001).
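The factorial design described above can be sketched in a few lines. This is an illustrative reconstruction in Python (the parameter names are ours, not the authors'), showing how the two orientation parameters combine into the 14 difficulty conditions:

```python
# Illustrative reconstruction (not the authors' code) of the difficulty
# grid: distractor orientation is controlled by a minimum angular
# distance from the horizontal target and a maximum variance around
# that minimum. Combinations that could exceed the 90° vertical plane
# are excluded.
from itertools import product

MIN_DISTANCES = [5, 15, 25, 45, 65]  # distractor-to-target similarity (degrees)
VARIANCES = [10, 20, 40]             # distractor-to-distractor similarity (degrees)

conditions = [
    (min_dist, var)
    for min_dist, var in product(MIN_DISTANCES, VARIANCES)
    if min_dist + var <= 90          # drop cells that cross the vertical plane
]

print(len(conditions))  # 14 difficulty conditions
```

Only the (65°, 40°) cell is excluded, since 65 + 40 = 105 crosses the 90° vertical plane, leaving 14 of the 15 factorial cells.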

  • Figure 1-2

The purpose of the second behavioral pilot study was to determine if PM strategy (as measured by PM cost) could be modulated by the difficulty of the ongoing task. The task design for this study was nearly identical to that used in the main experiment, but here OG task difficulty was held constant as either easy (difficulty level 4) or hard (difficulty level 12) for the entirety of each block and across each trial. Participants (n = 20) completed one block (15 trials per block) at each difficulty level. a, PM cost was calculated by subtracting the average non-PM trial OG RT from the average PM trial OG RT at each difficulty level for each participant. The distribution and median are shown. PM cost varied significantly as a function of OG task difficulty (F(1,19) = 35.63, p < .001), with cost being higher at the easier difficulty (M = 0.134 s, SE = 0.012) than at the harder difficulty (M = 0.031 s, SE = 0.012). *p < .05. b, PM accuracy was calculated at each difficulty for each participant. The distribution and median are shown. PM accuracy did not differ across conditions (F(1,19) = 0.785, p = 0.387; easy = 71.0% (4.5%), hard = 64.5% (5.8%)).
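The PM cost measure used here reduces to a simple difference of mean reaction times. A minimal sketch, with hypothetical data and variable names of our choosing:

```python
# Minimal sketch of the PM-cost computation described above
# (toy data; names are ours, not the authors'). PM cost is the mean
# ongoing-task RT on PM trials minus the mean RT on non-PM trials,
# computed per participant and difficulty level.
from statistics import mean

def pm_cost(pm_rts, non_pm_rts):
    """Difference in mean ongoing-task reaction time (seconds)."""
    return mean(pm_rts) - mean(non_pm_rts)

# Toy example: slower ongoing-task responses under a PM load
pm_trial_rts = [0.95, 1.02, 0.98]      # seconds, PM trials
non_pm_trial_rts = [0.85, 0.88, 0.82]  # seconds, non-PM trials

cost = pm_cost(pm_trial_rts, non_pm_trial_rts)
print(round(cost, 3))  # 0.133
```

A positive cost indicates that maintaining the PM intention slowed ongoing-task responses, the signature of a proactive monitoring strategy.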

  • Figure 2-1

Visualization of model comparisons for by-trial estimates of PM strategy shifts. Here, we used AICc to evaluate the relative fit of linear, quadratic, and cubic models of PM cost over the course of each trial. The Akaike weights calculated from the AICc scores of each model are shown in Figure 2-3a. We identified the lowest AICc for each trial, and then calculated the proportion of trials best fit by a linear, quadratic, or cubic relationship for each participant (shown in Figure 2-3b). A 1st order polynomial (linear model) fit best for nearly all trials (mean = 93.43%, 95% CI = [92.62%, 94.24%]; Akaike weight = 0.873, 95% CI = [0.865, 0.880]), compared to 2nd order fits (mean = 5.72%, 95% CI = [4.99%, 6.45%]; Akaike weight = 0.111, 95% CI = [0.104, 0.119]) or 3rd order fits (mean = 0.85%, 95% CI = [0.68%, 1.02%]; Akaike weight = 0.016, 95% CI = [0.012, 0.019]).
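The AICc comparison described above can be sketched as follows. This is our own minimal implementation with toy data, not the paper's analysis code, and the parameter-count convention (k = polynomial order + 1 for the intercept) is an assumption:

```python
# Sketch of AICc-based selection among polynomial fits (our own
# minimal version with toy data). For a least-squares fit with k
# parameters and n observations:
#   AIC  = n * ln(RSS / n) + 2k
#   AICc = AIC + 2k(k + 1) / (n - k - 1)
# Akaike weights renormalize exp(-delta_i / 2) across the models.
import math
import numpy as np

def aicc(y, y_hat, k):
    n = len(y)
    rss = float(np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2))
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    deltas = np.asarray(scores) - min(scores)
    raw = np.exp(-deltas / 2)
    return raw / raw.sum()

# Toy trial: PM cost drifting linearly over probes, plus noise
rng = np.random.default_rng(0)
t = np.arange(20)
cost = 0.02 * t + rng.normal(0, 0.05, size=t.size)

scores = []
for degree in (1, 2, 3):
    fit = np.polyval(np.polyfit(t, cost, degree), t)
    scores.append(aicc(cost, fit, k=degree + 1))  # +1 for the intercept

weights = akaike_weights(scores)
best = int(np.argmin(scores)) + 1  # polynomial order with lowest AICc
```

The weights sum to one and express the relative support for each model given the data, which is how the per-trial weights reported above were obtained.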

  • Figure 2-2

Additional model comparisons for trial-by-trial estimates of PM strategy shifts. One concern with fitting models on a by-trial basis is that noise may bias model selection toward models with fewer parameters. To address this concern, we performed a less conservative AIC model selection analysis (without the small-sample correction term) and a bootstrap analysis in which polynomial fits were calculated for a random subsample of trials. For the AIC analysis, we performed the same regression steps as detailed for the AICc analysis, but used the AIC estimation term instead of the AICc term. After calculating an AIC score for each trial, we selected the lowest score among the 1st, 2nd, and 3rd order polynomials as the best fit for that trial. We then calculated the relative Akaike weight for each model on each trial and averaged that value for each model type for each participant. Across participants, the average Akaike weights were similar between 1st and 3rd order polynomial fits (mean difference = 0.006 (SE = .010), t(77) = 0.65, p = .52; panel a). Significantly more trials for each participant were best fit by a 1st order than by a 3rd order polynomial (mean difference = 20.5% (SE = 1.7%), t(77) = 12.014, p < .001; panel b). Another way to mitigate the impact of noise on model selection is to fit the model to more than a single trial at a time. To avoid averaging across all trials while still obtaining an estimate of model-fit reliability, we performed a bootstrap analysis. In this analysis, we first z-scored PM-cost values within each subject. Next, we combined all trials into one super-subject. On each bootstrap iteration, 50 trials of each of the five trial types (increasing starting easy, increasing starting middle, fixed, decreasing starting hard, decreasing starting easy) were randomly selected from the super-subject pool.
Then, 1st, 2nd, and 3rd order polynomial models were fit to each trial-type sample, and the model with the lowest AIC value was selected as the best fit. We repeated this process 1000 times and found that a linear (1st order) polynomial best fit a significantly greater number of these samples (57.7% of all trials), followed by a quadratic fit (2nd order, 29.1%) and a cubic fit (3rd order, 13.4%). Significantly more samples were best fit by a linear relationship than would be predicted by chance (χ2(1, n = 1000) = 364.06, p < .001).
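The chance test at the end of this analysis is a goodness-of-fit chi-square on the bootstrap winner counts. A minimal sketch, with hypothetical counts matching the reported proportions and a uniform one-third expectation as an assumed null (the paper's exact null model is not stated, so the statistic below will not match the reported value):

```python
# Goodness-of-fit chi-square on bootstrap winner counts (our own
# sketch; counts are hypothetical, chosen to match the reported
# 57.7% / 29.1% / 13.4% proportions, and the uniform null is an
# assumption).
def chi_square_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n_samples = 1000
wins = {"linear": 577, "quadratic": 291, "cubic": 132}  # toy counts
expected = [n_samples / 3] * 3  # "no preference" null: one third each

stat = chi_square_stat(wins.values(), expected)
print(round(stat, 2))
```

A large statistic relative to the chi-square distribution's critical value indicates that linear fits won far more often than a no-preference null would predict.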

  • Table 2-3

Comparison of behavioral results from experiments 1 and 2. These data include the key results presented in Figure 2 separately for the behavioral-only participants, the neural participants, and the combined groups. Each analysis section of the table (A-F) corresponds to the same panel from Figure 2. Analyses of the relationship between trial type (PM/non-PM), difficulty (1 to 15), and OG task accuracy, OG RT, and PM cost were carried out by first running a mixed-effects regression (lme4 package in R) modeling the interaction between trial type and difficulty, and then a separate model comparing the main effects of difficulty and trial type without the interaction term. Random effects of individual slope and intercept were included in each regression. Within-subject ANOVAs were used to compare PM cost slope and PM accuracy across conditions.

  • Table 3-1

PM > Non-PM GLM Analysis. FSL’s FEAT was used to identify voxels that were more responsive on PM trials than on non-PM trials (cluster correction, p < .001). This table lists the coordinates and descriptors for all significant voxel clusters.

  • Figure 3-2

Surface searchlight analysis. Results from the surface-based searchlight classification analysis used to decode the PM intention on PM trials. Vertices in red are those that survived threshold-free cluster enhancement significance testing (H0 mean = 50%, p < .001). Classification was successful in only two posterior regions: the ventral temporal cortex and the lateral occipital cortex. Notably absent from this map is the anterior lateral prefrontal cortex. To perform this analysis, anatomical surface outputs from Freesurfer recon-all were converted to AFNI/SUMA format using SUMA_Make_Spec_FS. Surfaces were remapped to a standard topology using MapIcosahedron and co-registered to a reference functional volume using align_epi_anat so that functional data could be masked by the surface volume. Voxels determined not to be part of the surface were masked out of the searchlight analysis. The surface searchlight analysis was performed in MATLAB using functions from the CoSMoMVPA toolbox. Each searchlight sphere was defined by selecting the 100 vertices closest to a center vertex according to geodesic distance. L2-weighted logistic regression classifiers were trained on four categories and tested within each searchlight sphere using a k-fold cross-validation procedure. Only the five main PM task blocks were used for this analysis; data from the localizers were excluded. Thus, on each fold, four of the five PM-task blocks were used for training the classifier and the held-out block was used for testing. Accuracy was then computed on face and scene probes. The face/scene accuracy across all five folds was averaged, and that value was assigned to the center vertex of the sphere.
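The leave-one-block-out cross-validation at the core of this searchlight can be sketched in Python. This is a schematic with synthetic data, simplified to two categories and a hand-rolled L2-regularized logistic regression (the actual analysis used CoSMoMVPA's four-category classifiers in MATLAB, and real searchlights would repeat this per vertex neighborhood):

```python
# Schematic sketch of leave-one-block-out cross-validation for one
# searchlight sphere (synthetic data; simplified to two categories).
import numpy as np

rng = np.random.default_rng(0)

def fit_l2_logistic(X, y, lam=1.0, lr=0.1, steps=500):
    """Gradient descent on L2-penalized logistic loss (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        grad = X.T @ (p - y) / len(y) + lam * w / len(y)
        w -= lr * grad
    return w

# Toy data: 5 scanning blocks x 20 patterns x 100 "vertex" features,
# with a category-dependent signal added to Gaussian noise.
n_blocks, n_per_block, n_feat = 5, 20, 100
signal = rng.normal(0, 1, n_feat)
X_parts, y_parts, block_parts = [], [], []
for b in range(n_blocks):
    labels = rng.integers(0, 2, n_per_block)
    data = rng.normal(0, 1, (n_per_block, n_feat))
    data += np.outer(labels * 2 - 1, signal)  # +signal vs -signal
    X_parts.append(data)
    y_parts.append(labels)
    block_parts.append(np.full(n_per_block, b))
X = np.vstack(X_parts)
y = np.concatenate(y_parts)
block = np.concatenate(block_parts)

# Leave one block out: train on 4 blocks, test on the held-out block.
accs = []
for held_out in range(n_blocks):
    train, test = block != held_out, block == held_out
    w = fit_l2_logistic(X[train], y[train])
    pred = (X[test] @ w > 0).astype(int)
    accs.append(np.mean(pred == y[test]))

sphere_accuracy = float(np.mean(accs))  # value assigned to the center vertex
```

The mean held-out accuracy is then compared against the 50% chance level (after threshold-free cluster enhancement, in the actual analysis) to decide whether the sphere's center vertex carries PM intention information.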

In this issue: eNeuro, Vol. 6, Issue 6, November/December 2019
Keywords

  • cognitive control
  • cognitive flexibility
  • fMRI
  • MVPA
  • prospective memory
  • working memory

Copyright © 2025 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822