Visual Search Lab Class

From Psy241wiki

         Write up of the visual search lab class in week 6
         Feedback was provided daily (now, on demand) on the grade the current version warrants, so you can compare and contrast better and worse write-ups
         Current grade = borderline 2:1 (although most sections are clear 2:1 quality, the fact that the results
         of mean score for condition by number of targets are missing from the results would mean that many markers might hesitate to give this a 2:1). 
         Compare previous versions 

Please edit individual sections, rather than the whole page, to avoid conflicts



         Good - a simple description of the experiment

The effect of automaticity on processing speeds in visual search tasks


         This is ok, clear and focussed on the right things: what the theories were, and what you did
         The final lines are the right topic - an explicit interpretation of the theoretical implications of the results - but it isn't as clear as it could be.
         Notice how the description of the conditions has been improved by making it very explicit
           - e.g. 'a varied mapping condition (looking for a target letter(s) in an array of distractor stimuli which were also letters)'
         Good use of abbreviations (CM and VM)
         Don't be afraid of being wordy on important elements such as definitions
         If you have a contradiction between prediction and your results, don't worry. 
         Say what you found and what you predicted, and discuss the discrepancy in the discussion
         (so, no, the last bit isn't too much for the abstract)

An investigation was undertaken to test Schneider and Shiffrin's (1977) theory of automatic processing. Schneider and Shiffrin adapted Sternberg's (1966) high-speed memory scanning paradigm to identify whether increasing the number of items to be searched for increased the time it took participants to find a target. Expanding on Sternberg's findings, they also examined whether the type of stimuli participants had to search for (digits or letters) would affect their processing and therefore their response times. To investigate this, subjects took part in a between-participants experimental design and were assigned either to a varied mapping condition (VM, looking for a target letter(s) in an array of distractor stimuli which were also letters) or to a consistent mapping condition (CM, looking for a target digit(s) in an array of distractor stimuli that were letters). Response times were slower when more potential targets had to be searched for, supporting Sternberg's finding. Furthermore, response times were significantly quicker for those in the CM condition. However, there was no significant interaction between condition and number of targets. - contradiction! This suggests that automatic processing is more likely to occur during a task not only when there are fewer targets to look for, but when finding the targets involves skills that come easily to the participant. (In this case, one of the main skills that only the consistent mapping condition required was the separation of numbers from letters. Conventionally, we perceive numbers as different from letters; this is a skill that has been practised through reading, one we use during most of our waking hours.) "consistent mapping condition (CM, looking for a target digit(s) in an array of distractor stimuli that were letters)" Should not CM mean that the targets are letters in an array of letters?


No more than one side


         Good. You make explicit why these hypotheses stem from the theory. 
         If you have multiple predictions (as you do here), it makes sense to make clear which stem from which theory, and which is most surprising/interesting
         Small note: the expected relationship between RT and targets is not an inverse one (because bigger RTs are associated with harder tasks)

Sternberg's findings suggested that the more targets to be searched for, the slower the participants' response times. Based on these findings, we predict a positive relationship between the number of targets to find and response times.

Furthermore, Schneider and Shiffrin found that those in the CM condition tended to respond more quickly overall than those in the VM condition. We therefore predict that when the task involves more familiar skills, such as the separation of digits from letters, response times will be quicker.

Finally, Schneider and Shiffrin found that, even when the number of targets to find in the CM condition was at its maximum, and in the VM condition at its minimum, the former condition still produced quicker response times than the latter. Another hypothesis, therefore, is that there will be a significant interaction between number of targets and condition: that the effect of number of targets is greater under the VM condition than under the CM condition.



         This is almost perfect. Brief and to the point. 
         Psychologists will, in general, be interested in how you did your random allocation (since this is often a way biases creep into an experiment)
         This section also anticipates that the average reader will wonder why there are so many more people in condition 1 than condition 2. 
         Providing details, even if the answer is that the method wasn't very systematic (as in this case), is always the best option

Overall, there were 172 participants: 103 in the CM condition and 69 in the VM condition, with the CM condition having more participants as a result of an unsystematic method. All participants were second year Psychology undergraduates at the University of Sheffield. Gender and age were not recorded because we were not looking for any effects of these variables. However, the majority of these students were of conventional university-attending age (age 19-21), and the majority of these students were female. Participants were an opportunity sample, recruited through a module class.

Participants were allocated to one of the two conditions according to where they were seated in the room: participants near the front were placed in the VM condition, while participants near the back were placed in the CM condition. More people were sitting towards the back of the room, which resulted in a larger sample in the CM condition than in the VM condition.


       Try to make the labels and terms consistent throughout the report
       Some of this isn't clear, with essential information missing (e.g. the number of trials)
       It is better to use paragraphs and sentences than bulletpoints
       Good points include : mentioning the two factors (and whether they were within or between), how the response was recorded
       You don't need to mention the SPSS version number - we assume that the stats would be the same whatever version you used.
       It is better to mention the materials used at the same time as how they were used.
       Standard items (tables and chairs) don't need to be explicitly spelt out. 
       For example you might say 'participants sat in front of a PC, which displayed the stimuli'
       Small point: 2 groups, not 3, surely?

All participants undertook the task in the computer room within the Psychology building. Owing to space limitations, participants were split across two groups. Each participant sat in front of a PC, which displayed the stimuli; the stimuli were presented using MATLAB.

In each trial, participants first viewed a slide informing them of the target item(s) for that particular set of stimuli. A slide then showed a set of distractors, within which the target had to be recognised. As soon as participants saw a target, they pressed the spacebar as quickly as possible, which registered the response and gave feedback on whether a target had been found on that particular trial. There were up to four slides of information for each trial, and the position of the target within them changed every time. The target stimuli varied depending on the condition: the CM condition involved finding a target digit in a random set of letters, and the VM condition involved finding a target letter in a set of letter distractors. The number of targets was manipulated in both conditions, so that participants had to remember one, two or four target items. At the end of the experiment, MATLAB produced three individual scores and three average scores for each participant, one for each of the 1-, 2- and 4-target trials.
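The trial structure described above can be sketched roughly as follows. This is a hypothetical illustration, not the original MATLAB code: the display size of four items and the 50% target-present rate are assumptions, as the report does not state them.

```python
import random
import string

def run_trial(n_targets, condition):
    """Build one search trial and check it against the memory set.

    Returns (correct, target_present). In the real experiment 'detected'
    would come from the participant's spacebar press, with the response
    time recorded alongside it.
    """
    if condition == "CM":   # consistent mapping: digit targets, letter distractors
        memory_set = random.sample(string.digits, n_targets)
    else:                   # varied mapping: letter targets, letter distractors
        memory_set = random.sample(string.ascii_uppercase, n_targets)

    # Distractors are drawn so they never coincide with the memory set
    distractor_pool = [c for c in string.ascii_uppercase if c not in memory_set]
    display = random.sample(distractor_pool, 4)

    target_present = random.random() < 0.5   # assumed target-present rate
    if target_present:
        display[random.randrange(4)] = random.choice(memory_set)

    detected = any(item in memory_set for item in display)  # stand-in for key press
    return detected == target_present, target_present
```

Running `run_trial(2, "CM")` builds one consistent-mapping trial with two target digits to hold in memory, mirroring one cell of the design.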


      Good to start by mentioning the design, making it explicit which factors were within subjects and which between.
      I would prefer to say 'number of targets' varied within subjects, as this is full and explicit.
      After introducing the design it is good to follow up by mentioning the factors, and the levels (define these)
      Then the DV(s) 

The experiment used a mixed measures design, whereby the number of targets varied within subjects and the type of target varied between subjects. Thus each participant took part in only one condition, searching for one particular type of target, but all participants completed trials with each number of targets.

There were two factors in this study: the number of potential targets and the type of potential target. The number of potential targets had three levels (one, two and four), while the type of potential target had two levels (letters or digits), both of which involved letter distractors. The dependent variable was the mean response time.


Sufficient detail for someone else to replicate


         Did you test the DVs for normality? How can a violation of this assumption be dealt with?
         If it was me I'd make sure that the reader knew that I knew that the ANOVA was a 1-within, 1-between ANOVA, but 'mixed' is acceptable
         Ideally you would show a graph, but I know you can't upload figures to the wiki, so a table will have to do (showing mean RT in each of the six conditions)
         Channel capacity would also allow a better discussion, which speaks to the research mentioned in the introduction

A two-way (mixed) ANOVA, with one within-subjects factor and one between-subjects factor, was carried out on the data. The assumption of sphericity was violated, and therefore a Greenhouse-Geisser correction was used. Results showed a significant main effect of condition (F(1,170)=15.8, p<.001) on participants' reaction times in responding to the targets: participants in condition 1 (M=.878, SE=.01) were overall slower than those in condition 2 (M=.818, SE=.012). There was also a significant main effect of number of targets (F(2,259)=204.7, p<.001) on participants' reaction times, with participants being slower when they had to search for four targets (M=.969, SE=.013) than for two (M=.853, SE=.009) or one (M=.723, SE=.009). As figure 1 shows, there was no significant interaction between number of targets and condition (F(2,259)=.919, p=.377).

Present the descriptive statistics (mean latency + sd) for each condition, followed by the inferential statistical. Then calculate the ‘channel capacity’ for Conditions 1 and 2
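As a rough sketch of the channel capacity calculation the note above asks for, capacity can be estimated as the reciprocal of the slope of mean RT against number of targets. This is an illustration only: it uses the pooled means reported in the results (.723, .853, .969 s for 1, 2 and 4 targets), whereas the feedback asks for per-condition means, which are not reported.

```python
def slope(xs, ys):
    """Least-squares slope of ys on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

targets = [1, 2, 4]
mean_rt = [0.723, 0.853, 0.969]   # pooled means from the Results, in seconds

b = slope(targets, mean_rt)       # extra seconds per additional target
capacity = 1 / b                  # items scanned per second
print(f"slope = {b:.3f} s/item, capacity = {capacity:.1f} items/s")
```

On Schneider and Shiffrin's account the CM slope should be near zero (capacity effectively unlimited), so comparing the slopes of the two conditions separately would be the informative step.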


       1st para: You need to be very careful invoking the concept of attention as an explanation in a study of attention: 
       --- does saying the task requires 'more attention' explain or just describe the result?
       Other sentences in this paragraph need rewriting to be clearer
       2nd para: good to link back the study to general theory
       3rd para: now neuroscience - great stuff, integrating different kinds of methods/perspectives on the topic
       4th para: limitations, and qualifications. The discussion of order effects isn't quite clear. 
          --- did ppts do the conditions in a fixed order? Why would a "less effort" effect apply in this expt but not the original?

Was the hypothesis supported? If not, why not? Ways the experiment might be improved. Any other relevant observations.

The findings support the first two experimental hypotheses: that response times would be slower in condition 1 than in condition 2, and that response times would be slower when there were more targets to search for. When a task requires more attention, it involves controlled processing, and is therefore more likely to be processed serially (Schneider & Shiffrin, 1977). Serial processing takes longer because stimuli have to be processed in turn. Furthermore, controlled processing has been found to occur more in novel situations (when learning something new). It may be that condition 2 produced quicker responses because participants have been learning to separate digits from letters since they first learnt to read. This skill has been practised so often that it has become more automatic, requires less attention, and can therefore be processed more quickly.

Furthermore, models such as ACT* (Anderson, 1992) support Schneider and Shiffrin's findings and predictions about automatic and controlled processing. Schneider and Shiffrin explain such processes in terms of their access to 'nodes' of knowledge: automatic processes generally have consistently good and direct access to nodes, which is why they require less attention, whereas controlled processes have weaker and more unstable access, which must be strengthened (via practice) until it eventually becomes a direct link, i.e. an automatic process. This is supported by the semantic network structure of ACT* and by Collins and Loftus's (1975) notion of criteriality.

Neuroscientific findings on the role of the cerebellum and frontal cortex in learning also support these findings (Jansma et al., 2001). These parts of the brain are involved in learning new motor and cognitive actions and, for processing to become automatic, the skill must be practised repeatedly to keep the connections strong and active. This suggests that the separation of digits and letters may be performed efficiently because the neural connections for this task are activated so often.


However, it is worth noting that the lack of a significant interaction between number of targets and condition did not support our third experimental hypothesis. This also contradicts Schneider and Shiffrin's finding that the number of targets should not affect the speed of responses in condition 2, because participants can separate digits and letters automatically. Looking at the direction of the effect, the means show that, if anything, the response times for 4-target trials in the CM condition were actually higher than those in the VM condition.

Perhaps the slower response times in the CM condition are related to familiarity rather than effort: if we follow Schneider and Shiffrin's reasoning, the fact that the skill is very familiar and well practised means it requires little attention from the central executive, and paying little attention may leave participants prone to slower responses. The non-significant interaction may also reflect the make-up of our sample: it consisted of undergraduate Psychology students, who may have taken part in similar laboratory experiments in the past and may understand the theories being tested in this experiment. Furthermore, there was little incentive to participate 'properly': no rewards (such as money or course credits) were promised in return for participation, and the students were not self-selected.

 To other contributors: Not sure if this is entirely relevant, but we might want to expand this by suggesting how this finding can be applied to everyday learning. After all, being able to do something automatically is a desirable attribute. The study still has a long way to go in describing the conversion of controlled processing into automatic processing. What other factors has it not considered?


Schneider, W. and Shiffrin, R. M. (1977). Controlled and automatic human information processing: 1. Detection, search, and attention. Psychological Review, 84, 1-66.
