Abstract
The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addressed the relationship between these two mechanisms. Specifically, we asked: Is implicit learning of spatial context affected by the amount of working memory load devoted to an irrelevant task? We tested observers in visual search tasks in which search displays occasionally repeated. Observers became faster when searching repeated displays than unrepeated ones, showing contextual cuing. We found that the size of contextual cuing was unaffected by whether observers learned repeated displays under unitary attention or with attention divided by working memory manipulations. These results held when working memory was loaded by colors, dot patterns, individual dot locations, or multiple potential targets. We conclude that spatial context learning is robust to interference from manipulations that limit the availability of attention and working memory.
DOI: 10.1037/a0020558
Web of Science ID: 000285272500003
PubMed ID: 20853996
PubMed Central ID: PMC2998575