Our eyes move to take in new information three or four times a second, and our understanding of the visual input seems to keep pace with this information flow. Eye fixation durations may be longer than the time required to perceive a scene, however, because they include time to encode the scene into memory and to plan and initiate the next saccade. Indeed, a picture presented for as little as 20 ms is easy to see if it is followed by a blank visual field (e.g., Thorpe, Fize, & Marlot, 1996). However, presenting another patterned stimulus after the target as a mask interferes with processing, particularly if the mask is another meaningful picture (Intraub, 1984; Loftus, Hanna, & Lester, 1988; Loschky, Hansen, Sethi, & Pydimarri, 2010; Potter, 1976). With rapid serial visual presentation (RSVP) of colored photographs of diverse scenes, each picture masks the preceding one, so only the last picture is not masked. Nonetheless, viewers can detect a picture presented for 125 ms in an RSVP sequence when they have only been given a name for the target, such as picnic or harbor with boats (Intraub, 1981; Potter, 1975, 1976; Potter, Staub, Rado, & O'Connor, 2002). Here, we tested the limits of viewers' detection ability by asking them to look for or recall named targets in sequences of six (Exp. 1) or 12 (Exp. 2) pictures that they had never seen before, presented for durations between 13 and 80 ms per picture.

One reason for using such short durations was to investigate the possibility that the visual system has been configured by experience to process scene stimuli directly to an abstract conceptual level, such as "a picnic." In feedforward computational models of the visual system (Serre et al., 2007a; Serre, Oliva, & Poggio, 2007b), the units that process a visual stimulus are hierarchically arranged: Units representing small regions of space (receptive fields) in the retina converge to represent larger and larger receptive fields and increasingly abstract information along a series of pathways from V1 to inferotemporal cortex and onward to prefrontal cortex. A lifetime of visual experience is thought to tune this hierarchical structure, which acts as a filter that permits the categorization of objects and scenes in a single forward pass of processing. In this model, even a very brief, masked presentation might be sufficient for understanding a picture.

A widely accepted theory of vision, however, is that perception results from a combination of feedforward and feedback connections, with initial feedforward activation generating possible interpretations that are fed back and compared with lower levels of processing for confirmation, establishing reentrant loops (Di Lollo, 2012; Di Lollo, Enns, & Rensink, 2000; Enns & Di Lollo, 2000; Hochstein & Ahissar, 2002; Lamme & Roelfsema, 2000). Such loops produce sustained activation that enhances memory. It has been proposed that we become consciously aware of what we are seeing only when such reentrant loops have been established (Lamme, 2006). A related suggestion is that consciousness arises from "recurrent long-distance interactions among distributed thalamo-cortical regions" (Del Cul, Baillet, & Dehaene, 2007, p. 2408). This network is ignited as reentrant loops in the visual system are formed (Dehaene, Kerszberg, & Changeux, 1998; Dehaene & Naccache, 2001; Dehaene, Sergent, & Changeux, 2003; see also Tononi & Koch, 2008). It has been estimated that reentrant loops connecting several levels in the visual system would take at least 50 ms to make a round trip, which would be consistent with the stimulus onset asynchronies (SOAs) that typically produce backward masking.

Thus, when people view stimuli for 50 ms or less with backward pattern masking, as in some conditions in the present study, the observer may have too little time for reentrant loops to be established between higher and lower levels of the visual hierarchy before earlier stages of processing are interrupted by the subsequent mask (Kovacs, Vogels, & Orban, 1995; Macknik & Martinez-Conde, 2007). In that case, successful perception would primarily result from the forward pass of neural activity from the retina through the visual system (DiCarlo, Zoccolan, & Rust, 2012; Hung, Kreiman, Poggio, & DiCarlo, 2005; Perrett, Hietanen, Oram, & Benson, 1992; Thorpe & Fabre-Thorpe, 2001). In support of the possibility of feedforward comprehension, Liu, Agam, Madsen, and Kreiman (2009) were able to decode object category information from human visual areas within 100 ms after stimulus presentation.

An open question is what level of understanding is achieved in the initial forward wave of processing. One behavioral approach to assessing how much is understood in the feedforward sweep is to measure the shortest time to make a discriminative response to a stimulus. Studies by Thorpe, Fabre-Thorpe, and their colleagues (see the reviews by Thorpe & Fabre-Thorpe, 2001, and Fabre-Thorpe, 2011) required participants to make a go/no-go response to the presence of a category such as animals (or vehicles, or faces) in photographs presented for 20 ms without a mask. They found that differential electroencephalographic activity for targets began about 150 ms after presentation. The shortest above-chance reaction times (which would include motor response time) were under 300 ms. Choice saccades to a face in one of two pictures were even faster, as short as 100 ms (Crouzet, Kirchner, & Thorpe, 2010). In another study (Bacon-Macé, Kirchner, Fabre-Thorpe, & Thorpe, 2007), pictures with or without animals were followed by texture masks at SOAs between 6 and 107 ms; animal detection was at chance at 6 ms, but above chance starting at 12 ms, with performance approaching an asymptote at 44 ms. These times suggested to the investigators that the observers were relying on feedforward activity, at least for their fastest responses.

A different question from that of feedback is whether a selective set or expectation can modify or resonate with the feedforward process (Llinás, Ribary, Contreras, & Pedroarena, 1998), obviating the need for reentrant processing and enabling conscious perception with presentation durations shorter than 50 ms. It is well known that advance information about a target improves detection. For example, in a recent study by Evans, Horowitz, and Wolfe (2011), participants viewed a picture for 20 ms, preceded and followed by texture masks, and judged whether they saw a specified target (e.g., animal, beach, or street scene). Accuracy was consistently higher when the target was specified just before the stimulus was presented, rather than just after. Numerous studies have shown that selective attention has effects on the visual system in advance of expected stimuli (e.g., Cukur, Nishimoto, Huth, & Gallant, 2013). For example, in a recent study using multivoxel pattern analysis (Peelen & Kastner, 2011), the amount of preparatory activity in object-selective cortex that resembled activity when viewing objects in a given category was correlated with successful item detection.

To evaluate the effect of attentional set on target detection, in the present experiments we compared two conditions between groups. In one group, the target was named just before the sequence (providing a specific attentional set), and in the other, the target was named just after the sequence (with no advance attentional set). In the latter case, the participant had to rely on memory to detect the target. Previous studies of memory for pictures presented in rapid sequences have shown that memory is poor with durations in the range of 100–200 ms per picture (Potter & Levy, 1969; Potter et al., 2002). Given these results, and the known benefits of advance information already mentioned, we expected that advance information would improve performance; the question was whether it would interact with duration, such that detection of the target would be impossible at the shorter durations without advance information. Such a result would conflict with the hypothesis that feedforward processing, without reentrance and without top-down information, is sufficient for conscious identification.

Although both the feedforward and feedback models predict that performance will improve with presentation time, the main questions addressed here were whether knowing the identity of the target ahead of time would be necessary for successful detection, particularly at high rates of presentation, and whether we would observe a sharp discontinuity in performance as the duration of the images was reduced below 50 ms, as is predicted by reentrant models. A seminal earlier study (Keysers, Xiao, Földiák, & Perrett, 2001; see also Keysers et al., 2005) showed successful detection using RSVP at a duration as short as 14 ms, but the target was cued by showing the picture itself, and pictures were reused for a single participant so that they became familiar. In the present experiments, by specifying the target with a name and by using multiple pictures in each sequence that the participants had never seen before, we forced participants to identify the target at a conceptual level rather than simply matching specific visual features.

Experiment 1

Method

Procedure

Two groups of participants viewed an RSVP sequence of six pictures presented for 13, 27, 53, or 80 ms per picture and tried to detect a target specified by a written name (see Fig. 1). The one-to-four-word name reflected the gist of the picture as judged by the experimenters. Examples are swan, traffic jam, boxes of vegetables, children holding hands, boat out of water, campfire, bear catching fish, and narrow street. For those in the before group, each trial began with a fixation cross for 500 ms, followed by the name of the target for 700 ms, and then a blank screen of 200 ms and the sequence of pictures. A blank screen of 200 ms also followed the sequence, and then the question “Yes or No?” appeared and remained in view until the participant pressed “Y” or “N” on the keyboard to report whether he or she had seen the target. Those in the after group viewed a fixation cross for 500 ms at the beginning of the trial, followed by a blank screen of 200 ms and the sequence. At the end of the sequence, another 200-ms blank screen appeared, and then the name was presented simultaneously with the yes/no question until the participant responded.

Fig. 1

Illustration of a trial in Experiment 1. The target name appeared either 900 ms before the first picture or 200 ms after the last picture, and the two forced-choice pictures appeared after the participant responded "yes" or "no"

On trials in which the target had been presented, the participant’s response was followed by a two-alternative forced choice between two pictures that matched the target name. The participant pressed the “G” or “J” key to indicate whether the left or right picture, respectively, had been presented. On no-target trials, the words “No target” appeared instead of the pair of pictures.
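To make the trial structure concrete, the sequence of display events for the two groups can be sketched as follows. This is only an illustrative outline in Python (the actual experiment was programmed in MATLAB with the Psychophysics Toolbox); the event labels and function name are ours, and the per-picture duration varied by block.

```python
# Illustrative timeline of one yes/no trial (durations in ms).
# "dur" is the per-picture duration for the block (13, 27, 53, or 80 ms).
def trial_events(group, target_name, pictures, dur):
    rsvp = [("picture", pic, dur) for pic in pictures]
    if group == "before":
        pre = [("fixation cross", "+", 500),
               ("target name", target_name, 700),
               ("blank", "", 200)]
        post = [("blank", "", 200),
                ("prompt", "Yes or No?", None)]          # stays up until keypress
    else:  # "after" group: the name appears only with the yes/no prompt
        pre = [("fixation cross", "+", 500),
               ("blank", "", 200)]
        post = [("blank", "", 200),
                ("prompt", target_name + "  Yes or No?", None)]
    # On target-present trials, a two-alternative forced choice between two
    # pictures matching the target name followed the yes/no response.
    return pre + rsvp + post
```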

Participants

The 32 participants (17 women, 15 men) were volunteers 18–35 years of age who were paid for their participation. All signed a consent form approved by the MIT Committee on the Use of Humans as Experimental Subjects. Participants were replaced if they made more than 50% false "yes" responses, overall, on nontarget trials, because such a high false alarm rate suggested that the participant was not following instructions but was guessing randomly. One participant was replaced in the before group, and three in the after group.

Materials

The stimuli were color photographs of scenes. The pictures were new to the participants, and each picture was presented only once. For the targets, two pictures were selected that matched each target name; which one appeared in the sequence was determined randomly. The other picture was used as a foil in the forced choice test after each target-present trial. The pictures were taken from the Web and from other collections of pictures available for research use. They included a wide diversity of subject matter: indoor and outdoor, both with and without people. The pictures were resized to 300 × 200 pixels and were presented in the center of the monitor on a gray background. The horizontal visual angle was 10.3° at the normal viewing distance of 50 cm. For the forced choice, two 300 × 200 pixel pictures were presented side by side.
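As a check on the reported geometry, the visual angle subtended by a stimulus of width w at viewing distance d is 2·atan(w / 2d). A minimal sketch of that calculation follows; the physical picture width of roughly 9 cm is inferred from the reported 10.3° at 50 cm and is not stated in the text.

```python
import math

def visual_angle_deg(width_cm, distance_cm):
    """Full visual angle (degrees) subtended by a stimulus of a given width."""
    return 2 * math.degrees(math.atan(width_cm / (2 * distance_cm)))

# A picture width of about 9 cm (inferred, not reported) reproduces the
# stated 10.3 deg horizontal visual angle at the 50-cm viewing distance.
print(round(visual_angle_deg(9.0, 50.0), 1))  # -> 10.3
```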

Design

A practice block was presented at 133 ms per picture, followed by eight experimental blocks of trials. The durations in the first four blocks were 80, 53, 27, and 13 ms per picture, and the same order was repeated in the last four blocks. Each block contained 20 trials, including five no-target trials. The target, which was never the first or last picture, appeared in Serial Position 2, 3, 4, or 5, balanced over target trials within each block. Across every eight participants, the eight blocks of trials were rotated so that the pictures in each block of trials were seen equally often at each duration and in each half of the experiment.
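One way to picture the block rotation is as a cyclic, Latin-square-style assignment in which each successive participant's mapping of stimulus blocks to session positions shifts by one. The paper does not spell out the rotation scheme, so the sketch below is an assumption for illustration only.

```python
# Hypothetical sketch of the counterbalancing rotation.  The duration order
# over the eight session positions is fixed; the eight stimulus blocks are
# rotated across every eight participants so that each block appears equally
# often at each duration and in each half of the session.
DURATIONS_MS = [80, 53, 27, 13, 80, 53, 27, 13]

def block_assignment(participant_index, n_blocks=8):
    shift = participant_index % n_blocks
    # stimulus block shown at each session position, paired with its duration
    return [((pos + shift) % n_blocks, DURATIONS_MS[pos])
            for pos in range(n_blocks)]

# Participant 0 sees block 0 first (at 80 ms), participant 1 sees block 1
# first, and so on; after eight participants the rotation is complete.
print(block_assignment(0))
print(block_assignment(1))
```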

Apparatus

The experiment was programmed with MATLAB 7.5.0 and the Psychophysics Toolbox extension (Brainard, 1997) version 3, and was run on a Mac mini with a 2.4-GHz Intel Core 2 Duo processor. The Apple 17-in. CRT monitor was set to a 1,024 × 768 resolution with a 75-Hz refresh rate. The room was normally illuminated. Timing errors sometimes occur in RSVP sequences (McKeeff, Remus, & Tong, 2007). Timing precision was controlled using Wyble's Stream package for MATLAB. We checked the actual timing on each refresh cycle in each of the groups, and excluded trials in which a timing error of ±12 ms (equivalent to a single refresh of the monitor) or greater affected the target picture or the pictures immediately before and after the target. Since the timing errors were random, they increased noise in the data but did not produce any systematic bias. In Experiment 1, an average of 22% of the target trials were removed in the name-before group, and 10% were removed in the name-after group. In Experiment 2, timing errors occurred on fewer than 1% of the trials.
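The trial-exclusion rule can be sketched as follows. This is an illustrative reconstruction in Python, not the original Stream/MATLAB code; the function name and data layout are ours.

```python
# At a 75-Hz refresh, one frame lasts ~13.3 ms, so the nominal durations
# correspond to whole numbers of refreshes (13 ms ~ 1 frame, 27 ~ 2,
# 53 ~ 4, 80 ~ 6).
FRAME_MS = 1000.0 / 75.0

def keep_trial(measured_ms, intended_ms, target_index, tolerance_ms=12.0):
    """Drop the trial if the target picture, or the picture immediately
    before or after it, missed its intended duration by 12 ms or more."""
    for i in (target_index - 1, target_index, target_index + 1):
        if 0 <= i < len(measured_ms):
            if abs(measured_ms[i] - intended_ms[i]) >= tolerance_ms:
                return False
    return True

# Example: a nominal 13-ms target that actually stayed up for two refreshes
# (~26.7 ms) exceeds the 12-ms tolerance, so the trial is excluded.
measured = [FRAME_MS, 2 * FRAME_MS, FRAME_MS, FRAME_MS, FRAME_MS, FRAME_MS]
print(keep_trial(measured, [13] * 6, target_index=1))  # -> False
```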

Analyses

Repeated measures analyses of variance (ANOVAs) were carried out on individual participants' d' measures, as a function of before/after group and presentation duration (80, 53, 27, or 13 ms per picture). Planned paired t tests at each duration, separately for each group, compared d' with chance (0.0). Serial position effects were analyzed for the proportions of hits on target-present trials (since there was no way to estimate false "yes" responses as a function of serial position, we did not use d'). Separate ANOVAs were carried out on the accuracy of the forced choice responses on target-present trials, conditional on whether the participant had responded "yes" (a hit) or "no" (a miss).
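For reference, d' for the yes/no task is the difference between the z-transformed hit rate (on target-present trials) and false-alarm rate (on no-target trials). A minimal sketch follows; the correction used for hit or false-alarm rates of 0 or 1 is our assumption, since the text does not specify how extreme rates were handled.

```python
from scipy.stats import norm

def d_prime(hits, n_target_trials, false_alarms, n_notarget_trials):
    """d' = z(hit rate) - z(false-alarm rate), with rates of 0 or 1 nudged
    by a 1/(2N) correction (an assumption; the paper does not state one)."""
    def clamp(count, n):
        return min(max(count / n, 1 / (2 * n)), 1 - 1 / (2 * n))
    return norm.ppf(clamp(hits, n_target_trials)) - norm.ppf(
        clamp(false_alarms, n_notarget_trials))

# Example: 12 hits on 15 target trials and 1 false alarm on 5 no-target
# trials at one duration gives d' of about 1.68; chance performance is 0.
print(round(d_prime(12, 15, 1, 5), 2))  # -> 1.68
```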

Results and discussion

The results are shown in Fig. 2. For the d' ANOVA of yes–no responses (Fig. 2A), we found main effects of name position, F(1, 30) = 4.792, p < .05, ηG² = .066, and duration, F(3, 90) = 38.03, p < .001, ηG² = .414, as well as an interaction, F(3, 90) = 7.942, p < .001, ηG² = .129. As Fig. 2 shows, having the target name presented before rather than after the sequence benefited detection substantially at 80 ms, but not at all at 13 ms, with the other durations falling in between. Detection improved as the duration increased from 13 to 80 ms. Separate paired t tests, two-tailed, showed that d' was significantly above chance (p < .001) at each duration in each group. For the name-before group at 13 ms, t(15) = 4.45, p < .001, SEM = 0.162; the significance of the difference increased at each of the other durations. For the name-after group at 13 ms, t(15) = 7.91, p < .0001, SEM = 0.139, and at 27 ms, t(15) = 5.60, p < .0001, SEM = 0.122; the significance of the difference increased at the other two durations.

Fig. 2

Results of Experiment 1, in which participants detected a picture that matched a name given before or after the sequence of six images (N = 16 in each group). Error bars depict the standard errors of the means. (A) d' results for yes–no responses. (B) Proportions correct on a two-alternative forced choice between two pictures with the same name on target-present trials, conditional on whether the participant had reported “yes” in the detection task (labeled “Hit”) or “no” (“Miss”). Chance = .5

In an ANOVA of the effect of the serial position of the target on the proportions of correct "yes" responses, the main effect of serial position was significant, F(3, 90) = 4.417, p < .01, ηG² = .023. The means were .71, .71, .69, and .75 for Serial Positions 2, 3, 4, and 5, respectively, suggesting a small recency effect. A marginal interaction with name position, F(3, 90) = 2.702, p = .05, ηG² = .014, indicated that this effect was larger when the name came after the sequence. This was confirmed by separate analyses of serial position in the before and after groups: Serial position was not significant in the before group (p = .095), but was in the after group, F(3, 45) = 11.23, p < .001, ηG² = .073, for which the means were .67, .69, .67, and .75.

An ANOVA of the two-alternative forced choice results on target-present trials (Fig. 2B) showed that accuracy was high (M = .73) when participants had reported "yes" to the target (a hit), but at chance (M = .52) when they had reported "no" (a miss), F(1, 30) = 57.92, p < .001, ηG² = .253. The main effect of group (before/after) was significant, F(1, 30) = 6.70, p < .05, ηG² = .018, and interacted with whether the response had been "yes" or "no," F(1, 30) = 4.63, p < .05, ηG² = .026. When participants reported having seen the target, forced choice accuracy was higher in the before than in the after condition, although both were above chance. When the target was missed, both groups were close to chance. We found a main effect of duration, F(3, 90) = 3.76, p < .05, ηG² = .048, and no other significant interactions.

The main findings of Experiment 1 are that viewers can detect and retain information about named targets that they have never seen before at an RSVP duration as short as 13 ms, and that they can do so even when they have no information about the target until after presentation. Furthermore, no clear discontinuity in performance emerged as duration was decreased from 80 to 13 ms. If reentrant feedback from higher to lower levels played a necessary role in extracting conceptual information from an image, one would expect an inability to detect any targets at 27 and 13 ms, even in the before condition, contrary to what we observed. If advance information about the target resonated or interacted with the incoming stimuli, accounting for successful performance at 27 and 13 ms without reentrance, then performance should have been near chance at those durations in the after condition, again contrary to the results. A feedforward account of detection is more consistent with the results, suggesting that a presentation as short as 13 ms, even when masked by following pictures, is sufficient on some trials for feedforward activation to reach a conceptual level, without selective attention.

Experiment 2

One question about the results of Experiment 1 was whether they would generalize to sequences longer than six pictures. Given that targets were limited to only four serial positions (excluding the first and last pictures), could participants have focused on just those pictures, maintaining one or more of them in working memory to compare subsequently to the target name? In that case, increasing the number of pictures to 12 (in Exp. 2) should markedly reduce the proportion detected, at least in the name-after condition.

Method

The method was the same as that of Experiment 1, except as noted.

Participants

The 32 participants (22 women, 10 men) were volunteers 18–35 years of age, most of whom were college students; none had participated in Experiment 1. They were paid for their participation. All signed a consent form approved by the MIT Committee on the Use of Humans as Experimental Subjects. Participants were replaced if they made more than 50% false "yes" responses, overall, on nontarget trials. No participant was replaced in the before group, and four were replaced in the after group.

Design

The design was like that of Experiment 1, with two groups of participants, one with the target presented before the sequence, the other with the target presented after. The main difference was that trials consisted of 12 rather than six pictures. To make the 12-picture sequences, two 6-picture sequences from Experiment 1 were combined by randomly pairing the trials in a given block, with the restriction that the two targets in a pair were in the same serial position (2, 3, 4, or 5; after combination, the two potential targets were in Serial Positions 2 and 8, or 3 and 9, etc.). To end up with an even number of six-item sequences, we generated two new six-picture trials per block, one with a target and one without. Each block contained 11 trials, eight with targets and three without. Each of the eight target serial positions was represented once per block. Which of the two target names was used was counterbalanced between subjects within each group. Across participants within a group, the eight blocks of trials were rotated so that the pictures in each block of trials were seen equally often at each duration and in each half of the experiment.
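The pairing rule for building the 12-picture sequences can be sketched as follows. This is an illustrative reconstruction (the function name and trial representation are ours), not the original stimulus-preparation code.

```python
import random

def combine_into_twelve(six_picture_trials):
    """Randomly pair 6-picture target trials whose targets share a serial
    position (2-5), so that after joining, the two potential targets fall
    at positions k and k + 6 of the 12-picture sequence."""
    by_position = {}
    for pictures, target_pos in six_picture_trials:
        by_position.setdefault(target_pos, []).append(pictures)
    twelve_picture_trials = []
    for pos, trials in by_position.items():
        random.shuffle(trials)
        for first, second in zip(trials[0::2], trials[1::2]):
            # Only one of the two potential targets was actually named on a
            # given trial (counterbalanced across participants).
            twelve_picture_trials.append((first + second, (pos, pos + 6)))
    return twelve_picture_trials
```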

Results and discussion

The results are shown in Fig. 3. The main results were similar to those of Experiment 1. In the d' analysis of the yes–no responses, we found main effects of whether the name was given before or after, F(1, 30) = 8.785, p < .01, ηG² = .083, and duration, F(3, 90) = 28.67, p < .001, ηG² = .397. Detection was more accurate when the name was given before the sequence rather than after, and it improved as the duration increased from 13 to 80 ms. The interaction was not significant (p = .22). Separate paired t tests, two-tailed, showed that d' was significantly above chance (p < .02) at each duration in each group. For the name-before group at 13 ms, t(15) = 3.28, p < .01, SEM = 0.152; the significance of the difference increased at each of the other durations. For the name-after group at 13 ms, t(15) = 2.83, p < .02, SEM = 0.155; the significance of the difference again increased at each of the other durations.

Fig. 3

Results of Experiment 2, in which participants detected a picture that matched a name given before or after a sequence of 12 images (N = 16 in each group). Error bars depict the standard errors of the means. (A) d' results for yes–no responses. (B) Proportions correct on a two-alternative forced choice between two pictures with the same name on target-present trials, conditional on whether the participant had reported “yes” in the detection task (labeled “Hit”) or “no” (“Miss”). Chance = .5

In an ANOVA of the effect of the serial position of the target on the proportions of correct "yes" responses, the main effect of serial position was significant, F(7, 210) = 5.20, p < .001, ηG² = .032. The means were .57, .54, .66, .76, .66, .63, .64, and .62 for Serial Positions 2, 3, 4, 5, 8, 9, 10, and 11, respectively, suggesting a slight disadvantage for the earliest serial positions, but no recency benefit. We found no interactions.

An ANOVA of the two-alternative forced choice results on target-present trials (Fig. 3B) showed that accuracy was fairly high (M = .67) when participants had reported "yes" to the target (a hit) but was near chance (M = .52) when they had reported "no" (a miss), F(1, 30) = 20.61, p < .001, ηG² = .122. The main effect of group (before/after) was not significant, F(1, 30) = 2.34, p = .136, ηG² = .018, but a marginal interaction did emerge with whether the response had been "yes" or "no," F(1, 30) = 2.88, p = .10, ηG² = .019. As in Experiment 1, having the name before was only better than having the name after when the participant reported having seen the target; when the trial was a miss, both groups were close to chance. We found no main effect of duration, F(3, 90) = 1.35, but did find an interaction with hit/miss, F(3, 90) = 6.43, p < .01, ηG² = .064: As can be seen in Fig. 3B, the hit benefit was larger at longer durations.

Altogether, the results of Experiment 2 replicated the main results of Experiment 1, but now with 12 pictures per sequence rather than six (see Fig. 4). An ANOVA compared the d' results of the two experiments. Performance (as d') was significantly higher with six-picture sequences (M = 1.33) than with 12-picture sequences (M = 1.06), F(1, 60) = 9.83, p < .01, ηG² = .057. No interactions with experiment were significant.

Fig. 4

Comparison of the d' results of Experiment 1 (six pictures) and Experiment 2 (12 pictures). Error bars depict the standard errors of the means

Clearly, we can reject the hypothesis that participants could encode only two or three pictures in working memory; otherwise, performance would have fallen more dramatically in Experiment 2, especially in the after condition, in which participants had to retain information about the pictures for later retrieval.

The results also demonstrate that a specific attentional expectation is not required for extracting conceptual information from a stream of pictures: Performance remained substantially above chance at all durations when the target was specified after the sequence. The forced choice results indicate, however, that visual details were lost at the two shorter durations with 12 pictures to retain, even when the target was correctly reported. Note that in the after condition the forced choice test was slightly delayed relative to the before condition, because the participants had to read the name of the target and scan their memory of the sequence before making a "yes" or "no" response. This intervening processing may account for the somewhat reduced forced choice performance in both Experiments 1 and 2 when the target name followed the sequence.

General discussion

The results of both experiments show that conceptual understanding can be achieved when a novel picture is presented as briefly as 13 ms and masked by other pictures. Even when participants were not given the target name until after they had viewed the entire sequence of six or 12 pictures, their performance was above chance even at 13 ms, indicating that a top-down attentional set is not required in order to rapidly extract and at least briefly retain conceptual content from an RSVP stream. The numbers of pictures in the sequence and their serial positions had little effect on performance, suggesting that pictures were processed immediately rather than accumulating in a limited-capacity memory buffer for subsequent processing. This pattern of results supports the hypothesis that feedforward processing is sufficient for the conceptual comprehension of pictures.

As expected, detection was more accurate, the longer the duration per picture. However, it was striking that no sharp drop in detection was apparent at or below a duration of 50 ms, contrary to the predictions of feedback or reentrant models of conscious perception (e.g., Del Cul et al., 2007; Lamme, 2006). Instead, performance declined gradually with shorter durations, but remained well above chance at 13 ms. Moreover, when viewers reported that they had detected the target, they were usually above chance in selecting it in a forced choice between two pictures, both of which fit the target name: That is, they remembered more about the picture than simply the concept provided by the target name. When they had not detected the target, their forced choice was near chance, suggesting that the visual features of unidentified pictures were not retained.

Although the present behavioral results cannot entirely rule out feedback, they do present a challenge to existing reentrant models. They also raise a further question: How can conceptual understanding persist long enough to be matched to a name presented 200 ms after the offset of the final masking picture, given that the target might have been any of the six or 12 pictures just viewed? The answer to this question may lie in the carwash metaphor of visual processing (Moore & Wolfe, 2001; Wolfe, 2003), in which each stimulus is passed from one level of processing to the next. In such a model, multiple stimuli can be in the processing pipeline at once. At the end of this pipeline, the stimuli, having now been processed to the level of concept, may persist in local recurrent networks that sustain activation for several pictures in parallel, at least briefly. In such a model, concepts are presumably represented in a multidimensional, sparsely populated network in which visual masks may not be effective if they are not also conceptually similar to the item being masked. The finding that a forced choice between two conceptually equivalent pictures was above chance only if the participant correctly detected the target is consistent with the conjecture that when feedforward processing does not reach a conceptual level, lower levels of representation are already masked, and no featural information can be accessed.

The finding that observers can perceive and comprehend conceptual information from such brief images extends previous evidence that a purely feedforward mode of processing is capable of decoding complex information in a novel image (e.g., DiCarlo et al., 2012; Serre et al., 2007a; Thorpe et al., 1996). Feedforward models are consistent with a range of neural results. For example, in a study by Keysers et al. (2001), recordings were made of individual neurons in the cortex of the anterior superior temporal sulcus (STSa) of monkeys as they viewed continuous RSVP sequences of pictures; the monkeys' only task was to fixate the screen. Neurons in STSa that were shown to respond selectively to a given picture at a relatively slow presentation rate of 200 ms per picture also responded selectively (although not as strongly) to the same picture at presentations as short as 14 ms per image.

The present behavioral results suggest that feedforward processing is capable of activating the conceptual identity of a picture, even when reentrant processing has presumably been blocked because the picture is presented briefly and is then masked by immediately following pictures. Since participants were capable of reporting the presence of a target under these conditions, the results strongly suggest that reentrant processing is not always necessary for conscious processing. They are consistent with the possibility, however, that reentrant loops facilitate processing and may be essential to comprehending the details of an image. For example, a rapid but coarse first pass of low-spatial-frequency information may provide global category information that is subsequently refined by reentrant processing (e.g., Bar et al., 2006). Work with monkeys has shown that neurons that are selective for particular faces at a latency of about 90 ms give additional information about facial features beginning about 50 ms later (Sugase, Yamane, Ueno, & Kawano, 1999). Reentrant processing therefore might be involved after an initial feedforward pass (Di Lollo, 2012).

The present findings can be contrasted with those of masked-priming studies in which the prime is not consciously seen, although it has an effect on the response to an immediately following stimulus. In a typical experiment, a brief presentation of a word in the range of 25–60 ms—the prime—is followed by a second, unmasked word, to which the participant must respond (Dehaene et al., 2001; Forster & Davis, 1984). If the prime is identical or related to the target word, the response to the latter is faster and more accurate than with no prime or an unrelated prime, even when the prime is not consciously identified. In such studies, the participant’s focus of attention is on the final word, whose longer duration permits it to receive full, recurrent processing that might block awareness of the more vulnerable information from the prime that was extracted during the feedforward sweep. In the present experiments, in contrast, the masking stimuli were of the same duration as the preceding target stimulus, and all were potential targets. In these conditions, even durations as short as 13 ms are clearly sufficient, on a significant proportion of trials, to drive conscious detection, identification, and immediate recognition memory.

Finally, perhaps our most striking finding is that performance was surprisingly good, even when the target name was given only after the sequence. It has long been assumed that the detection of rapidly presented targets in an RSVP stream (e.g., Potter, 1976) is possible only because participants have had the opportunity to configure their attentional filters in advance (e.g., Peelen & Kastner, 2011). Indeed, Potter (1976) found that the ability to detect an image named in advance was much greater than the ability to recognize pictures later in a yes–no test of all of the pictures mixed with distractors. Other research (e.g., Potter et al., 2002) has indicated that the memory traces generated by RSVP are fragile and are rapidly degraded by successive recognition testing. When participants are given an immediate recognition test of just one item, however, the present results show that they are able to detect it in their decaying memory trace at a level of accuracy not far below that achieved when the target was prespecified at the start of the trial. This result is consistent with the idea that a single forward sweep as short as 13 ms is capable of extracting a picture's conceptual meaning without advance knowledge. Moreover, the pictures' conceptual identities can be maintained briefly, enabling one to be matched to a name presented after the sequence.

A possible role for such rapid visual understanding in normal experience would be to provide nearly instantaneous conceptual activation that enables immediate action when necessary, without waiting to refine understanding by reentrant processing or by the kind of conscious reflection that requires a stable recurrent network.