<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?collection=6&amp;output=omeka-xml&amp;page=3" accessDate="2026-05-01T20:16:28+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>3</pageNumber>
      <perPage>10</perPage>
      <totalResults>24</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="25" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="912">
                <text>The Effect of Sleep on the Processing of Emotional False Memories</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="913">
                <text>Chloe Newbury</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="914">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="915">
                <text>People often think they remember events and information that in fact never happened. In previous studies using the Deese-Roediger-McDermott (DRM) paradigm, participants viewed lists of semantically related words, and during testing were more likely to accept as seen words that were related to the lists but were actually unseen, indicating a false memory. Research suggests that sleep promotes this effect, as does the use of negatively valenced stimuli, although the effect of emotion is disputed. The current study investigated what effect emotion, in particular valence, has on false memory formation, and whether sleep promotes emotional false memories. Fifty participants were tested on their recognition performance using an emotional and neutral DRM paradigm after a 12-hour period of sleep or wake. As predicted, we found an increase in false recognition of negatively valenced lure words, as well as an overall effect of emotion, with emotional words leading to increased false recognition compared to neutral words. We failed to replicate any sleep effect on performance accuracy of neutral or emotional memory, although the response time data indicate some effect of sleep on emotional memory performance. The quality of participants’ sleep and the design of the current study are explored as possible explanations for this lack of a sleep effect. This study therefore indicates that emotion plays a significant role in the formation of false memories independent of sleep.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="916">
                <text>DRM&#13;
false memory</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="917">
                <text>Negative and positive DRM word-lists and critical lures were taken from Brainerd, Holliday, Reyna, Yang, and Toglia (2010), who controlled for other properties that are thought to affect false memory formation, including concreteness, meaning, and frequency of words (Roediger, Watson, McDermott, &amp; Gallo, 2001). Neutral DRM lists and critical lures were taken from Stadler, Roediger, and McDermott (1999). Two separate lists were formed, one with negative and neutral words, and the other with positive and neutral words (see Appendix A for word-lists). Participants in both the positive and negative conditions viewed the same five lists of neutral words, as well as ten negative or positive word-lists. &#13;
Mean valence and arousal scores for word-lists and critical lures were taken from the Affective Norms for English Words (ANEW) (Bradley &amp; Lang, 1999). Independent samples t-tests showed that positive words had significantly higher ratings of valence than negative words, t(11.41) = 7.42, p &lt; .001, and neutral words, t(13) = 7.43, p &lt; .001. Negative words had significantly lower ratings of valence than neutral words, t(13) = 2.31, p = .038. Furthermore, negative and positive word-lists did not significantly differ in terms of arousal, t(12.92) = 0.52, p = .613; however, neutral words had significantly lower ratings of arousal than positive words, t(13) = 2.67, p = .019, and negative words, t(13) = 4.87, p &lt; .001. It was also important that word-lists were controlled in terms of frequency and backward associative strength (BAS). Frequency scores were taken from the MRC Psycholinguistic Database (Coltheart, 1981). Independent samples t-tests showed no significant difference in frequency ratings between negative and positive word-lists, t(18) = 0.18, p = .816, positive and neutral word-lists, t(13) = 0.35, p = .735, and negative and neutral word-lists, t(13) = 0.50, p = .624. BAS ratings were taken from the University of South Florida Free Association Norms (Nelson, McEvoy, &amp; Schreiber, 1998). There was no significant difference in BAS ratings of negative and positive words, t(18) = 4.92, p = .629, positive and neutral words, t(13) = 0.32, p = .757, and negative and neutral words, t(13) = 0.89, p = .391. (See Appendix B for mean ratings). &#13;
For critical lures, independent samples t-tests showed that positive lure words had significantly higher ratings of valence than negative lures, t(15.11) = 11.20, p &lt; .001, and neutral lures, t(11) = 4.24, p = .001. Negative lures had significantly lower ratings of valence than neutral lures, t(11) = 3.62, p = .004. There was no reliable difference between ratings of arousal for negative and positive lures, t(18) = 0.22, p = .828, positive and neutral lures, t(11) = 1.08, p = .305, and negative and neutral lures, t(11) = 1.62, p = .134. There was no reliable difference between frequency ratings of negative and positive lures, t(18) = 1.14, p = .268, positive and neutral lures, t(13) = 0.55, p = .593, and negative and neutral lures, t(13) = 1.11, p = .287. (See Appendix B for mean ratings).&#13;
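For illustration only, a comparison of this kind can be sketched with an independent-samples t-test in Python; this is a hypothetical reconstruction, not the study’s analysis script, and the rating values below are placeholders rather than the actual ANEW norms. Welch’s correction (equal_var=False) is what produces fractional degrees of freedom such as t(11.41).&#13;
from scipy import stats&#13;
&#13;
# Hypothetical valence ratings standing in for the ANEW norms.&#13;
positive = [7.8, 8.1, 7.5, 8.0, 7.9, 7.6, 8.2, 7.7]&#13;
negative = [2.1, 2.5, 1.9, 2.3, 2.0, 2.6, 2.2, 2.4]&#13;
&#13;
# equal_var=False applies Welch's correction, giving fractional df.&#13;
t, p = stats.ttest_ind(positive, negative, equal_var=False)&#13;
print(f"t = {t:.2f}, p = {p:.3f}")&#13;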
During testing, participants viewed 60 words in total: two previously seen words from each DRM list (a total of 30), the critical lure associated with each list (a total of 15), and an unrelated word for each list (a total of 15). Unrelated words were taken from lure words of unused DRM lists, as well as from Kousta, Vinson, and Vigliocco (2009), who developed emotional and neutral word-lists using the ANEW database. Unrelated words were matched to DRM word-lists in terms of valence, resulting in five unrelated neutral words, ten unrelated negative words and ten unrelated positive words. All words were presented in 18-point Courier New bold, in black, lower-case letters. &#13;
Participants in the sleep condition were required to wear an actigraph sleep monitor to more accurately measure their time spent asleep and the number of awakenings. All participants were given a questionnaire before each session to collect data on sleep habits, caffeine and alcohol intake (see Appendix C), and those in the wake condition were instructed not to nap throughout the day. &#13;
Procedure&#13;
Participants were randomly allocated to either the wake or sleep group, with those in the wake group trained on word-lists at 9am and tested on the same day at 9pm. Those in the sleep group took part in the training session at 9pm, and were tested the following day at 9am. Participants were randomly allocated to the negative or positive stimuli condition. &#13;
During the training session, participants were first asked to fill out a questionnaire to assess sleep habits and caffeine and alcohol intake. Participants were then required to sit approximately 60cm from the computer screen, and were shown 15 lists of 12 words, presented one word at a time in the centre of the screen. They were first presented with a fixation point for 500ms before the words from one list were presented for 1500ms each. After each list, participants were given three maths problems to solve, presented for 1000ms each, as a distractor task to prevent them from rehearsing words they had seen. Maths problems were presented in a random order for each participant, and each problem was only presented once throughout the task. After the three maths problems, the fixation cross reappeared and participants were given another list to remember. The order of word-lists was randomised, and the order in which each word in a list was presented was also randomised. &#13;
Participants were then asked to return 12 hours later, after a period of daytime wakefulness or overnight sleep. During the second session, participants first viewed a fixation cross for 500ms, and then the test words were presented one at a time in the centre of the screen for 120ms. Participants were required to identify whether or not they thought they had seen each word in the previous session. They did this by pressing a key on the keypad: zero corresponded to an old word (previously seen) and one to a new word (previously unseen). The zero and one keys were labelled ‘old’ and ‘new’ respectively, to aid participants. Participants were not given a response deadline. The fixation point reappeared 500ms after each response, before the next word appeared on the screen. All words were presented in random order.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="918">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="919">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="920">
                <text>Newbury2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="921">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="922">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="923">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="924">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="925">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="926">
                <text>Padraic Monaghan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="927">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="928">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="929">
                <text>Fifty participants (32 female, 18 male) with a mean age of 25.10 (SD = 9.25, range 18 to 62) took part in the study for course credit or as a volunteer</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="930">
                <text>4-way mixed analysis of variance (ANOVA)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="23" public="1" featured="1">
    <fileContainer>
      <file fileId="5">
        <src>https://www.johnntowse.com/LUSTRE/files/original/fc27f6fa5aa3b5c2ec188de4cbeefc44.pdf</src>
        <authentication>2983d0be2c388322ede175f2da332d2c</authentication>
      </file>
      <file fileId="6">
        <src>https://www.johnntowse.com/LUSTRE/files/original/ae430f6c841f862e00a44f12d0df1e8a.pdf</src>
        <authentication>b9bd1185b1ff26c600843d03fd22e71c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="874">
                <text>Running Memory Span Development: The Input Mechanism and Hebb effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="875">
                <text>Yu Xie</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="876">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="877">
                <text>It is unclear whether an active or a passive strategy is used in the running memory task, and whether the task elicits the Hebb effect. The aim of this study was to explore the input mechanism and the Hebb effect in the running memory task via a developmental study. Children were asked to perform four working memory tasks: a counting span task, a free recall task, a Hebb digit task, and a running memory task. In order to explore the Hebb effect in the running memory task, the last three digits of every third list were repeated. The results suggested that running memory was a recency-based phenomenon and that the Hebb effect was elicited in children.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="878">
                <text>running memory span development&#13;
input mechanism&#13;
Hebb effect&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="879">
                <text>The experiment was presented using SuperLab 4.0 on a Sony laptop with a 14-inch colour screen. Participants’ responses were recorded by the tester on answer sheets. Every child completed a counting span task, a free recall task, a Hebb digit task, and a running memory task.&#13;
Counting span task. The counting span arrays were developed from Towse and Hitch (1995) and consisted of equal numbers of target triangles and non-target squares. The target triangles were red, approximately 30 mm in length, and the non-target squares were blue, approximately 28 mm in length. The number of both target triangles and non-target squares varied from 3 to 9 (mean = 6). The counting span arrays were presented in the centre of the computer screen on a white background. The triangles and squares were randomly displayed at different positions in every display.&#13;
Free recall task. For this task, 144 high-frequency Chinese two-syllable nouns (see Appendix A) were recorded in a male voice at a rate of 1 word per second, using Adobe Audition 3.0. Two practice lists and ten test lists were presented, and every list included 12 words at the rate of 1 word per second. The words were played by a computer.&#13;
Hebb digit task. All digit lists contained the digits 1 to 9 in random order, avoiding any repetition of digits (see Appendix B). The digits were recorded using Adobe Audition 3.0 at a rate of 1 digit per second. There were 2 practice lists and 24 test lists, and each list contained nine digits. Among the test lists, 16 were different and the other 8 were identical – termed the Hebb list – presented on every third trial beginning on Trial 3. The 24 test lists were divided into 8 blocks, each comprising 2 different lists and a Hebb list.&#13;
Running memory task. The lists included 12, 14, 16, 18, or 20 random digits from 1 to 9 (see Appendix C), recorded as speech. Two presentation rates were used in this task: 0.5 s per digit as the fast rate and 2.5 s per digit as the slow rate. In both conditions, there were 2 practice lists and 24 test lists. In order to test the Hebb effect in the running memory task, the 24 test trials comprised 16 completely different lists and 8 lists whose last 3 digits were the same, presented on every third trial.&#13;
Procedure &#13;
The experiment lasted 45 min, and every child completed 4 tasks. Each participant was seated on a chair in front of the computer screen, at a distance of 65 cm. All tasks included two practice trials to help children become familiar with the procedure. Once children completed the practice trials and understood the procedure, they could proceed to the test trials. While children were performing the tasks, the experimenter gave no feedback about the accuracy of the recalled words or digits. Task order was counterbalanced using a Latin square design, as shown in Table 1. Because the running memory task had two conditions, fast and slow presentation, their order was also counterbalanced. In all, therefore, there were eight orders in the present study, and children were divided equally into eight groups based on these orders. After completing each task, participants were given sufficient time to rest.&#13;
Table 1&#13;
Task Orders for Four Tasks&#13;
&#13;
Task                | a     | b     | c     | d     | e     | f     | g     | h&#13;
Counting span task  | 1     | 2     | 3     | 4     | 1     | 2     | 3     | 4&#13;
Free recall task    | 2     | 1     | 4     | 3     | 2     | 1     | 4     | 3&#13;
Hebb digit task     | 3     | 4     | 1     | 2     | 3     | 4     | 1     | 2&#13;
Running memory task | 4(FS) | 3(FS) | 2(FS) | 1(FS) | 4(SF) | 3(SF) | 2(SF) | 1(SF)&#13;
Note. Columns a-h are the eight task orders. F = Fast-running memory task, S = Slow-running memory task.&#13;
Counting span task. The children were introduced to the counting and recall tasks. Before every trial, a fixation symbol was displayed in the centre of the screen for 0.5 s. When the target triangles and non-target squares were presented, participants were required to count the red triangles aloud and repeat the final number. Once the children repeated the last number, the experimenter pressed a key to show the next display, and counting speeds were recorded automatically by the computer. There were three trials at every level, and every trial at level n included n + 1 displays. For example, participants counted 2 displays at level 1 and 3 displays at level 2. The final level was level 4, which contained 5 displays. After the 2 to 5 displays, children were asked to report all the final numbers of red target triangles in the previous displays. If a child failed to recall correctly on at least two of the three trials, the counting span task ended at that level; otherwise, they progressed to the next level.&#13;
Free recall task. Children were required to listen to the words and, after the 12th word, repeat as many of them as possible in any order. The experimenter wrote down participants’ responses on answer sheets. If a child could not report a new word within 30 s, the experimenter proceeded to the next trial.&#13;
Hebb digit task. The procedure for the Hebb digit task was developed by Hebb (1961). Children were asked to listen to every list and report all digits in the correct order. Children reported the digits orally, and the experimenter recorded the responses on an answer sheet. Because the running memory task also involved Hebb lists, 48 children were asked whether they were aware of any regular pattern in the digit tasks after they had completed both the Hebb digit task and the running memory task. Only 5 participants noticed the repetition in the running memory and Hebb digit tasks.&#13;
Running memory task. Children listened to lists of digits, different from those in the Hebb digit task, and were required to repeat only the last three digits rather than all digits in the list. Two conditions were used to counterbalance the order effect: half of the children were administered the fast rate condition first and the other half were administered the slow rate condition first.&#13;
Scoring&#13;
Counting span task. Counting errors and counting speed were recorded, and the scoring method used was the partial-credit unit scoring prescribed by Conway et al. (2005). First, the correct items in each sequence were counted. If all items in a sequence were correct, that sequence was given one point; otherwise, the score for the sequence was the proportion of correct items. Finally, the counting span of a participant was calculated as the sum of the scores for all sequences.&#13;
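As an illustration of partial-credit unit scoring, the following minimal Python sketch (a hypothetical reconstruction from the description above, not the original scoring script) computes a span score from recalled sequences.&#13;
def partial_credit_unit_score(trials):&#13;
    # Each trial pairs the reported final numbers with the correct ones.&#13;
    # A fully correct sequence scores 1; otherwise, the proportion correct.&#13;
    score = 0.0&#13;
    for recalled, targets in trials:&#13;
        correct = sum(r == t for r, t in zip(recalled, targets))&#13;
        score += correct / len(targets)&#13;
    return score  # the span is the sum of the per-sequence scores&#13;
&#13;
# Hypothetical example: a perfect 2-display trial, then 2 of 3 correct.&#13;
print(partial_credit_unit_score([([3, 5], [3, 5]), ([4, 7, 2], [4, 7, 6])]))  # 1.0 + 2/3, about 1.67&#13;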
Free recall task. The scoring method used was the one prescribed by Tulving and Colotla (1970), which involved the calculation of the intratrial retention interval (ITRI). The ITRI for an item was the number of items intervening between its presentation and its recall. For instance, if the sequence was A, B, C, D, E, F, and G, and a participant reported G, F, and A, the ITRIs for the items were 0, 2, and 8, respectively. Before calculating the ITRI, the digit span on the Hebb non-repeating lists was calculated for every child. If a child’s digit span was 5, an item was classified as a word from primary memory when its ITRI was 5 or less, whereas it was classified as a word from secondary memory when its ITRI was 6 or more.&#13;
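The ITRI calculation can be sketched as follows; this illustrative Python (hypothetical, not the original scoring script) reproduces the worked example above.&#13;
def itri(presented, recalled):&#13;
    # ITRI = items presented after the word plus words recalled before it.&#13;
    n = len(presented)&#13;
    return [(n - 1 - presented.index(item)) + rank&#13;
            for rank, item in enumerate(recalled)]&#13;
&#13;
print(itri(list("ABCDEFG"), list("GFA")))  # [0, 2, 8], as in the example&#13;
# For a child with a digit span of 5, an ITRI of 5 or less indicates&#13;
# primary memory; an ITRI of 6 or more indicates secondary memory.&#13;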
Hebb digit task. Every digit recalled correctly at the correct position was scored one point. The score of the non-repeating lists was the mean score of each non-repeating list, and the score of the repeating lists was the mean score of each repeating list. &#13;
Running memory task. The running memory span score was calculated as the mean number of digits recalled in the correct positions. If 3 digits were recalled in the correct sequence, the score was 3; if a sequence of 2 digits (for example, the first and second digits, the second and third digits, or the first and third digits) was in the correct serial order, the score was 2; if a single digit was in the correct position, the score was 1. As in the Hebb digit task, the scores for non-repeating and repeating lists were calculated separately.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="880">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="881">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="882">
                <text>Xie2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="883">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="884">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="885">
                <text>English&#13;
Chinese</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="886">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="887">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="888">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="889">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="890">
                <text>Developmental Psychology&#13;
Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="891">
                <text>Fifty-seven Chinese primary school students (23 female, 34 male), aged between 7 and 13 years (Mean = 9 years 6 months; SD = 1.754) took part in the present study. The children were recruited from Grade one to Grade six at Tianyi School in Xuancheng City</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="892">
                <text>ANOVA&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="22" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="857">
                <text>The effect of different question types during shared book reading on children’s narrative comprehension</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="858">
                <text>Nicola Pooley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="859">
                <text>2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="860">
                <text>This study investigated the effect of different question types on narrative comprehension in young children. Forty-one five-year-olds participated in this study. One group (N=14) received three sessions of shared storybook reading in which they practised answering questions about literal information in the story, during the course of the storybook reading. A second group (N=13) practised answering questions about information that had to be inferred. A third group of controls (N=14) did not receive any intervention. All groups completed two comprehension assessments before and after the intervention: one was a measure of general listening comprehension, the other included measures of both literal and inferential comprehension. Children’s engagement during the storybook reading was also assessed. Contrary to predictions, neither intervention benefitted post-test comprehension significantly. In addition, engagement levels did not change over the course of the study. However, a consistent pattern was found for each comprehension measure: the group who received practice with answering inferential questions made the greatest gains. Implications for early literacy experiences are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="861">
                <text>reading comprehension</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="862">
                <text>Design&#13;
&#13;
The study was an intervention design with three phases: a pre-test, a training phase, and a post-test. There were three groups: two experimental groups who participated in all phases and a control group who completed only the pre- and post-tests. The design is shown in Table 1. In the pre- and post-test sessions, participants completed a general measure of listening comprehension (adapted from the Neale Analysis of Reading Ability II; Neale, 1997) and a bespoke measure of listening comprehension with questions to tap literal and inferential comprehension. Participants were assigned to groups on the basis of their scores in the pre-test so that the three groups (two intervention and one control) did not differ in their performance on these measures (see Table 5). Children in the intervention conditions listened to three stories in separate sessions and received practice at answering either literal or inferential questions throughout the stories. In the post-test all children were again assessed on alternate forms of the measures used in the pre-test.&#13;
&#13;
&#13;
&#13;
Table 1. Intervention design used.&#13;
&#13;
Group       | Pre-test                                  | Training 1  | Training 2  | Training 3  | Post-test&#13;
Control     | General + Bespoke Listening Comprehension | x           | x           | x           | General + Bespoke Listening Comprehension&#13;
Literal     | General + Bespoke Listening Comprehension | Literal     | Literal     | Literal     | General + Bespoke Listening Comprehension&#13;
Inferential | General + Bespoke Listening Comprehension | Inferential | Inferential | Inferential | General + Bespoke Listening Comprehension&#13;
&#13;
Materials: Pre- and Post-test.&#13;
&#13;
General Measure of Listening Comprehension. The following stories, taken from the Neale Analysis of Reading Ability (NARA; Neale, 1997), were read to each participant in either the pre- or post-test: Toys, Tree house, Lost and Found, Road Safety. Toys or Lost and Found were used as practice tasks at the beginning of the pre-/post-test to help develop rapport. The stories were chosen so that the level of difficulty was consistent across pre- and post-tests. The comprehension questions that went with each story were asked at the end of the story to obtain the general listening comprehension score. Table 2 shows an example of a story and some of the questions given.&#13;
&#13;
&#13;
Table 2. Example of general listening comprehension story used.&#13;
&#13;
General comprehension example: My friend and I made a tree house. We like to hide in it. We climb up the rope and pull it up after us. Then no-one knows where we are. We play space-ships. At tea time we slide down fast and we are always first for tea.&#13;
&#13;
Sample of questions asked:&#13;
What would you say was the best name for that story?&#13;
Who built the tree house?&#13;
How did the children always manage to be first for tea?&#13;
&#13;
&#13;
Bespoke Measure of Listening Comprehension. The stories used in this study were a series of books about a dog named ‘Harry’ written by Gene Zion. These stories were chosen because they were first published between 1956 and 1965, so they were suitable for this age group but the children were not likely to be familiar with them. The pictures from the stories were scanned, then printed on A4 sheets and laminated to make a set of wordless picture books. The original text was retained for each story; however, small sections of some of the stories were omitted to try to keep each story the same length.&#13;
&#13;
Two different types of questions were used in the bespoke measure of listening comprehension: literal and inferential questions. In the pre- and post-tests each child received eight literal questions and eight inferential questions after each story reading. The literal questions required the participants to recall facts from the text. The inferential questions tapped children’s ability to make inferences about information that was not stated explicitly in the text. These questions were designed to address causality (why an event happened), emotions (how a character was feeling), and future events (what might happen next in the story). The inferential questions in the pre- and post-tests, however, consisted of four emotion and four causality questions, as prediction questions could not be used at the end of the story. Table 3 gives examples of literal and inferential questions used.&#13;
&#13;
Table 3. Examples of literal and inferential questions used.&#13;
&#13;
Extract 1: Harry was a white dog with black spots who liked everything except having a bath. So one day when he heard the water running in the tub he took the scrubbing brush and buried it in the back garden.&#13;
Literal question: What did Harry bury in the back garden? Forced choice: the scrubbing brush/a sponge.&#13;
Causal inferential question: Why do you think Harry buried the scrubbing brush in the back garden? Forced choice: Because the family told him to/Because he did not want a bath.&#13;
&#13;
Extract 2: That night Harry slept in the dog house – again.&#13;
Literal question: Where was Harry made to sleep again? Forced choice: in the kitchen/in the dog house.&#13;
Emotion inference question: How do you think Harry felt about sleeping in the dog house? Forced choice: happy/sad.&#13;
&#13;
Extract 3: (After a sequence of events that led to Harry being covered in seaweed and thinking the hot dog man was calling his name.) Harry still thought the man was calling his name. He barked and jumped with joy. He jumped so much that suddenly…&#13;
Literal question (asked before ‘he jumped so much…’): What was the hot dog man really shouting? Forced choice: Hurry/Harry.&#13;
Prediction inferential question*: What do you think happened next? Forced choice: Everyone ran away/the seaweed fell off him.&#13;
&#13;
*Please note. Prediction questions were only used during the intervention sessions.&#13;
&#13;
Materials: Intervention&#13;
&#13;
Three of the stories were used for the intervention sessions. Scripts were produced that incorporated the questions for the intervention sessions during the stories. In the inferential intervention group there were four of each question type: causal, emotion and prediction. The inferential and literal questions were always placed at the same point in the story.&#13;
&#13;
Procedure&#13;
&#13;
Phase One: Pre-test. Children in all groups completed the general listening comprehension measure and the bespoke measure of literal and inferential comprehension. Each child was tested individually in a quiet space away from the classroom. The pre-test session was audio and video recorded. The video recorder was set in front of the participant to capture their direction of eye gaze. The experimenter explained the task to the child and obtained verbal consent. In the pre-test the experimenter asked the child if they had heard any stories about Harry the dog while showing them the front cover. One child reported recognising the story, but could not remember any details.&#13;
&#13;
General Listening Comprehension Measure. Each participant was read two stories; the first acted as a practice task to help develop rapport. Immediately after each story the children were asked the comprehension questions for that story. If a child could not answer a question, the experimenter offered the correct response and moved on to the next question. If the child gave an incorrect answer, the experimenter did not highlight that this was incorrect but simply moved on to the next question. The decision to respond to answers in this way was based on the pilot of the procedure. This age group seemed to become easily disengaged if they repeatedly supplied no answer or incorrect answers, and it was felt that this way of responding helped to maintain their confidence and interest in the task. Responses were scored as correct or incorrect; acceptable answers were provided in the NARA manual.&#13;
&#13;
Bespoke Listening Comprehension Measure. After the assessment of general listening comprehension, each child completed the bespoke listening comprehension task. The experimenter read out the story whilst the child followed the pictures in a wordless picture book version. At the end of the story sixteen questions were asked: 8 literal and 8 inferential, of which four were causal and four were emotion related. If the child could not answer a question or gave the wrong answer, s/he was offered a forced choice of two possible answers (examples in Table 3): one option was the correct target answer and one was incorrect. The forced choices were included in the pre-/post-test because they were also used during the intervention; however, answers based on a forced choice were not included in the analysis. In the pre- and post-test, if the child chose the correct response the experimenter agreed with the child and moved on to the next question. If the child selected the incorrect option, the experimenter also continued with the next question. The decision was taken not to correct the child at this stage: if the child was still answering incorrectly despite assistance, giving them the correct answer might change the representation they had created of the story and also affect their confidence, as mentioned earlier. The forced choices were alternated so that the correct answer occurred equally often in first and second positions across items. When scoring the responses, a child who gave the correct answer unaided (i.e., without the forced-choice option) was given one point; all other responses were scored zero.&#13;
&#13;
Phase two: Intervention (Intervention groups only). The intervention sessions took place the week after the selection phase, on three consecutive days. On each day, each child in the intervention groups was tested individually in a quiet space away from the classroom and the session was audio-recorded. Different stories were used in each session. As the stories were read to the participant they were asked questions (either literal or inferential depending on group assignment) about the story content. Children in the control group were not read to by the experimenter during this phase.&#13;
&#13;
Literal Questions Intervention Group. Children in this condition were read one story in each of the three intervention sessions and asked twelve questions that assessed their understanding of explicit details in the story, e.g., ‘What did the lady next door sing louder than?’ The questions were positioned throughout the text and related directly to information that had just been given in the story. If the children gave no response or an incorrect response, they were offered the forced choice. If a child still gave an incorrect answer after being given the forced choices, the experimenter corrected them and offered the correct answer. This was to try to ensure that the children were building accurate representations as they listened to the stories.&#13;
&#13;
Inferential Questions Intervention Group. The same stories and question-response technique were used as outlined in the literal questions condition. Questions were also placed at the same positions in the text; however, children in this condition were asked twelve inferential questions throughout each story that required them to think beyond the facts present in the text. In each story there were four causal inferential questions, e.g., ‘Why were Harry’s ears hurting?’, four prediction questions, e.g., ‘What do you think Harry did next?’, and four questions assessing understanding of the emotions of the characters, e.g., ‘How do you think Harry felt when the old lady told him to go away?’&#13;
&#13;
Phase three: Post-test. This session took place between five and seven days after the final intervention session and followed the same format as the pre-test. Children in all three groups completed the general listening comprehension story and the bespoke listening comprehension story with literal and inferential questions asked at the end of the story.&#13;
&#13;
Measure of Engagement. The video recordings from the pre- and post-test were analysed for the children’s level of engagement. This was based only on the child’s behaviour during the reading of the bespoke listening comprehension story. The coding scheme used for this analysis is shown in Table 4. A second rater scored 20% of the pre-test videos; there was 100% agreement between raters.&#13;
&#13;
Table 4. Coding scheme used to analyse level of engagement while listening to the bespoke story.&#13;
&#13;
Code 1 – Limited engagement. The child appears off-task and makes a large number of unrelated comments, or is distracted and looking away for a large part of the story reading.&#13;
Code 2 – Engaged, quiet. The child looks at the pictures and listens well throughout the story but does not make any independent comments.&#13;
Code 3 – Engaged, interactive. The child looks at the pictures and listens well throughout the story. They also make independent comments relating the events in the story to their lives, elaborate on the text, or ask questions about the text.&#13;
&#13;
&#13;
Group Assignment.&#13;
&#13;
Scores on the pre-test measures were used to assign the children to groups so as to ensure an equal range of scores in each. One-way analyses of variance were carried out on the general comprehension, literal, and inferential scores: all F &lt; 1.0 and all p &gt; 0.1. In addition, where possible, an equal number of boys and girls was assigned to each group. Table 5 shows the ages, numbers of boys and girls, and pre-test scores for each group.&#13;
&#13;
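For illustration, a group-equivalence check of this kind can be sketched with a one-way ANOVA in Python; the scores below are hypothetical placeholders, not the study data, and scipy.stats.f_oneway stands in for whatever package was actually used.&#13;
from scipy import stats&#13;
&#13;
# Hypothetical pre-test scores for the three groups (N = 14, 14, 13).&#13;
control = [4, 3, 5, 4, 3, 4, 5, 4, 3, 4, 5, 3, 4, 4]&#13;
literal = [3, 4, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 5]&#13;
inferential = [4, 4, 3, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5]&#13;
&#13;
# Groups were treated as equivalent when F &lt; 1.0 and p &gt; 0.1.&#13;
f, p = stats.f_oneway(control, literal, inferential)&#13;
print(f"F = {f:.2f}, p = {p:.3f}")&#13;
&#13;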
Table 5. Distribution of gender, age and pre-test scores across groups.&#13;
&#13;
Variable                           | Control | Literal | Inferential&#13;
Gender (male/female)               | 8/6     | 7/7     | 7/6&#13;
Age (years;months)                 | 5;5     | 5;5     | 5;4&#13;
General comprehension (proportion) | 0.43    | 0.46    | 0.46&#13;
Bespoke literal (max = 8)          | 3.79    | 3.79    | 4.15&#13;
Bespoke inferential (max = 8)      | 5.00    | 4.50    | 4.77&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="863">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="864">
                <text>Pooley2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="865">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="866">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="867">
                <text>Project description</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="868">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="869">
                <text>Kate Cain</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="870">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="871">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="872">
                <text>43 children (23 boys, 20 girls, mean age 5 years 4 months and range 4 years 9 months to 5 years 9 months) in their first year of primary school</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="873">
                <text>Chi-squared&#13;
McNemar test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="21" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="839">
                <text>The Specificity of Inhibitory Impairments in Autism and Their Relation to ADHD-type Symptoms</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="840">
                <text>Charlotte Sanderson</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="841">
                <text>2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="842">
                <text>Findings on inhibitory control in autism have been inconsistent. It is proposed that this may be partly task-related, with different ‘inhibition’ tasks tapping different classes of inhibitory ability. Thus, children with autism (CWA) (N = 31) and typically developing controls (TDC) (N = 28) matched for verbal and non-verbal mental age completed three tasks of inhibitory control, each representing different inhibition subcomponents: a Go/No-Go task (delay inhibition), the Dog-Pig Stroop task (conflict inhibition), and a Flanker task (resistance to distractor inhibition). Behavioural ratings of inattention and hyperactivity/impulsivity were also obtained for each child to consider a possible source of heterogeneity in inhibitory ability. It was predicted that the conflict task would be more problematic for CWA, and that higher ADHD-symptom ratings would predict poorer performance. On the Go/No-Go task, CWA showed superior inhibitory function to controls – making fewer false alarm errors and better task sensitivity. On the Dog-Pig Stroop, CWA showed impaired performance compared to controls – making more accuracy and speed related inhibitory errors. On the Flanker task, CWA showed equivalent inhibitory performance to TD children. Inhibitory impairments were predicted by high ratings of inattention in CWA, but only on the Dog-Pig Stroop. It is argued that CWA are perhaps impaired on tasks of conflict, but not delay or resistance to distractor inhibition. This may reflect the additional working memory demands of these tasks, and suggests that inhibitory difficulty is not a core executive deficit in autism. Symptoms of inattention may be an important predictor of inhibitory heterogeneity amongst CWA.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="843">
                <text>inhibition&#13;
Stroop&#13;
autism</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="844">
                <text>Sessions were completed in a well-lit, quiet room, free of distractions. Participants were tested individually and completed the three inhibitory control tasks (Go/No-Go task; Dog-Pig Stroop task; Flanker task) and two standardised measures (RCPM; BPVS) in counterbalanced order. The experimental session lasted approximately 40-60 minutes.&#13;
&#13;
All three inhibition tasks were written in PsyScript and run on a computer running Mac OS X 10.6.&#13;
&#13;
&#13;
Go/No-Go Task&#13;
&#13;
Task Design. On each trial, a shape (O, ∆, □, or ◊) would appear centrally on the computer screen. The shapes were simple black line drawings, subtending approximately 5° vertically and horizontally. Prior to the task, children were instructed to respond to three of the shapes by pressing a large external “star” button (the “Go” stimuli), but to resist responding to a fourth shape (the “No-Go” stimulus). The shape designated as the “No-Go” stimulus was counterbalanced between participants. To generate a prepotent response, 75% of trials were “Go” trials requiring a button press, and 25% were “No-Go” trials on which the response had to be withheld.&#13;
&#13;
The maximum inter-stimulus interval (ISI) (i.e. from stimulus onset to stimulus onset) was 2500ms. At the start of each trial, a fixation cross would appear at the centre of the screen for 200ms. This was then replaced by the stimulus, which remained on-screen for 200ms. After the stimulus offset, participants had a further 1000ms to respond, at which point the trial automatically terminated. Stimulus presentation was followed by a 1100ms pause before the next trial commenced. An error tone (“bleep”) was played immediately if the child made an omission error (i.e. failed to respond on a “Go” trial), or a false alarm (i.e. pressed the star button on a “No-Go” trial). A positive feedback-noise (“ping”) was played if the participant made a correct response.&#13;
&#13;
Procedure. Before starting the task, each child completed a warm-up session to familiarise them with the “Go” and “No-Go” stimuli. Training was terminated only when the child could correctly identify the required response for each shape. Children then completed a short practice block of eight trials containing all four stimuli presented in a fixed, but superficially random, order. There followed 144 experimental trials split into three 48-trial blocks, each separated by a short break. Stimulus presentation was randomised within each half-block to avoid clustering of “No-Go” trials. The task (including training) lasted approximately ten minutes.&#13;
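&#13;
The trial structure above (75% “Go” trials, with “No-Go” trials randomised within half-blocks so they do not cluster) can be made concrete with a short sketch. This is an illustration written in Python, not the original PsyScript task code, and the exact shuffling scheme is an assumption:&#13;
&#13;
import random&#13;
&#13;
def make_block(n_trials=48, half_size=24, p_nogo=0.25):&#13;
    """Build one 48-trial block: shuffle Go/No-Go labels within each&#13;
    half-block so the 25% No-Go trials are spread out, not clustered."""&#13;
    n_nogo = int(half_size * p_nogo)  # 6 No-Go trials per 24-trial half&#13;
    trials = []&#13;
    for _ in range(n_trials // half_size):&#13;
        sub = ["nogo"] * n_nogo + ["go"] * (half_size - n_nogo)&#13;
        random.shuffle(sub)&#13;
        trials.extend(sub)&#13;
    return trials&#13;
&#13;
block = make_block()&#13;
print(block.count("go"), block.count("nogo"))  # 36 Go, 12 No-Go per block&#13;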
&#13;
&#13;
Four measures of task performance were obtained:&#13;
&#13;
1. Number of false alarms (or commission errors): “No-Go” trials on which the button was pressed. This is the main measure of inhibitory control, with false alarms representing failure to inhibit the prepotent button-press response.&#13;
&#13;
2. Number of hits: “Go” trials on which the child responded. This is not a primary measure of inhibitory control, but it indicates how reliably participants detect targets when present and suggests the strength of the prepotent response generated. Hit rates are also used in the calculation of task sensitivity (see below).&#13;
&#13;
3. Task Sensitivity: Estimates of participants’ task sensitivity can be calculated using signal detection theory, from the probabilities of hits and false alarms. This permits differentiation between participants who make fewer false alarms but also fewer hits (poor task sensitivity), and those who make fewer false alarms despite a good hit rate (good task sensitivity). This is important because a low false alarm rate could be due to a generally low response rate (for both “Go” and “No-Go” stimuli). Task sensitivity (A′) is a nonparametric measure ranging from 0.5 (chance performance) to 1 (perfect sensitivity), and is calculated as follows (Grier, 1971):&#13;
&#13;
A′ = 0.5 + [(H − FA)(1 + H − FA)] / [4H(1 − FA)]&#13;
&#13;
where H = p(Hit), FA = p(False Alarm), and H ≥ FA (a worked computation is sketched after this list).&#13;
&#13;
4. Hit Trial Reaction Time (RT): Although not a measure of inhibitory control per se, this might indicate between-group differences in processing speed and/or task-strategy.&#13;
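&#13;
As a worked illustration of the A′ formula above, here is a minimal Python sketch (not the project’s own analysis code); the complementary below-chance branch is an assumption, since the text only gives the H ≥ FA form:&#13;
&#13;
def a_prime(h, fa):&#13;
    """Grier (1971) nonparametric sensitivity A' from p(Hit) and p(False Alarm)."""&#13;
    if h == fa:&#13;
        return 0.5  # chance performance&#13;
    if h &gt; fa:&#13;
        return 0.5 + ((h - fa) * (1 + h - fa)) / (4 * h * (1 - fa))&#13;
    # Below-chance case (FA exceeds H): complementary form (an assumption)&#13;
    return 0.5 - ((fa - h) * (1 + fa - h)) / (4 * fa * (1 - h))&#13;
&#13;
# e.g. a child with a 90% hit rate and a 20% false-alarm rate&#13;
print(round(a_prime(0.90, 0.20), 3))  # 0.5 + (0.7 * 1.7) / (4 * 0.9 * 0.8) ≈ 0.913&#13;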
&#13;
&#13;
Dog-Pig Stroop Task.&#13;
&#13;
Task Design. The stimuli used in this task were two simple line drawings of a dog and a pig (see Appendix 4). Stimuli were presented centrally on the computer screen, subtending approximately 6° vertically and 9° horizontally. Two experimental conditions, each containing 32 trials, were administered. In the control (baseline) condition, children were simply instructed to say “dog” when they saw the dog image and “pig” when they saw the pig image, as quickly as possible. In the Stroop (i.e. inhibition) condition, children were instructed to say “dog” to pig images and “pig” to dog images, as quickly and accurately as possible.&#13;
&#13;
Children’s responses were recorded by an assistant during the task, and also audiotaped so that transcripts could subsequently be checked by the experimenter. If a child made a mistake on a trial and then corrected themselves, their initial response was recorded. To estimate response latency on each trial, the experimenter would press a large external button as soon as the child made their initial response. Although this measure of reaction time is relatively crude, many of the children taking part would not have been testable with throat microphones, which measure voice onset. Such microphones are highly sensitive to all sounds, including subtle body movements, lip-smacks and stray vocalisations, reducing their reliability with participants who might have difficulty minimising task-irrelevant movement or sound. It is also notable that the additional error in reaction-time estimates induced by this method would be constant across groups.&#13;
&#13;
On each trial, the stimulus remained centrally on-screen until a response had been registered (i.e. the response button had been pressed). If no response had been registered after 3000ms had elapsed, the trial automatically terminated, and the message “Too Slow” was presented for 500ms. Stimulus presentation was followed by a 2000ms pause (inter-trial interval) before the next trial commenced. The maximum ISI was thus 5500ms.&#13;
&#13;
Procedure. All children completed the control condition first to provide a measure of baseline picture-naming speed and accuracy[2]. After the control condition had been completed, children were presented with training slides to familiarise them with the Stroop naming procedure. After successfully completing the four practice trials, children would commence the 32-trial Stroop condition block. The task (including training) lasted approximately seven minutes.&#13;
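&#13;
The baseline condition exists so that Stroop performance can be corrected for each child’s ordinary picture-naming speed. One plausible way to score this is a simple difference score; this Python sketch is illustrative only (the project’s exact derivation is not given in this record), and the latencies are made up:&#13;
&#13;
def mean(xs):&#13;
    return sum(xs) / len(xs)&#13;
&#13;
def interference(stroop_rts, baseline_rts):&#13;
    """Mean Stroop RT minus mean baseline RT: the extra time needed to name&#13;
    the picture with the 'opposite' label (higher = more interference)."""&#13;
    return mean(stroop_rts) - mean(baseline_rts)&#13;
&#13;
# Hypothetical per-trial naming latencies (ms) for one child&#13;
baseline_rts = [820, 790, 845, 810]&#13;
stroop_rts = [1010, 1120, 980, 1050]&#13;
print(interference(stroop_rts, baseline_rts))  # 1040.0 - 816.25 = 223.75 ms&#13;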
&#13;
&#13;
Flanker Task&#13;
&#13;
Task Design. For this computer task, children were presented with two large arrow-shaped buttons – one pointing left and one pointing right. There were three experimental conditions: baseline, congruent, and incongruent. Children were asked to respond by pressing the arrow-button pointing the same way as the white target arrow, which was positioned centrally, subtending approximately 4° vertically and 6° horizontally. On baseline trials, the white target arrow was presented on its own. On congruent trials, the white target arrow was flanked by four red ‘distractor’ arrows pointing the same way as the target (e.g. →→→→→). On incongruent trials, the white target arrow was flanked by four red ‘distractor’ arrows facing in the opposite direction to the target arrow (e.g. ←←→←←). It is thus only on incongruent trials that the distractors must be actively inhibited/suppressed for correct target identification.&#13;
&#13;
The maximum ISI was 2900ms. A fixation cross would appear centrally on-screen for 200ms. This was then replaced by the stimulus (baseline, congruent or incongruent), which remained on-screen until a button-press had been registered. If no response had been registered after 1200ms had elapsed, the trial automatically terminated. An error-tone (“bleep”) was played if the participant pressed the wrong arrow-button. If the child failed to respond before the trial terminated, an error-tone was played and a “Too Slow” message was briefly displayed. When the child responded correctly, a positive feedback-noise (a “ping”) was given. There was a 1100ms pause (inter-trial interval) between trials.&#13;
&#13;
Procedure. Each child first completed a series of familiarisation trials. This was followed by three blocks of 30 trials separated by short breaks (90 trials in total). Each block contained ten baseline, ten congruent and ten incongruent trials, distributed randomly. Error rates and mean reaction times (RT) for baseline, congruent and incongruent trials were recorded.&#13;
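&#13;
The per-condition error rates and mean RTs recorded here support the standard flanker-interference contrast (incongruent minus congruent RT). A minimal Python sketch with made-up trial records, purely for illustration:&#13;
&#13;
from collections import defaultdict&#13;
&#13;
def condition_summary(trials):&#13;
    """Mean RT on correct trials and error rate, per condition.&#13;
    Each trial is a (condition, rt_ms, correct) tuple."""&#13;
    rts, errors, counts = defaultdict(list), defaultdict(int), defaultdict(int)&#13;
    for cond, rt, correct in trials:&#13;
        counts[cond] += 1&#13;
        if correct:&#13;
            rts[cond].append(rt)&#13;
        else:&#13;
            errors[cond] += 1&#13;
    return {c: (sum(rts[c]) / len(rts[c]), errors[c] / counts[c]) for c in counts}&#13;
&#13;
# Hypothetical trials&#13;
trials = [("baseline", 520, True), ("baseline", 530, True),&#13;
          ("congruent", 540, True), ("congruent", 560, True),&#13;
          ("incongruent", 650, True), ("incongruent", 700, False)]&#13;
summary = condition_summary(trials)&#13;
print(summary["incongruent"][0] - summary["congruent"][0])  # 100.0 ms flanker effect&#13;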
&#13;
&#13;
[1] Although a cut-off of 30 points is typically used with younger children, a slightly lower cut-off score is thought to be more accurate for use with older children and adolescents (Mesibov et al., 1989). This is due to the inclusion of one or two items on which older children with autism tend not to score highly (e.g. imitation).&#13;
&#13;
[2] Condition-order was fixed because a pilot study showed that, if children completed the experimental (i.e. Stroop) condition first, they had difficulty setting aside the ‘opposite’ rule in order to name the pictures normally for the control condition. This was shown by elevated error rates and poorer naming speeds. Therefore, in order to obtain a realistic measure of ‘automatic’ (i.e. control-condition) picture-naming speed and accuracy, and a stronger prepotent response, it was deemed appropriate to fix the order of condition presentation (control, then Stroop). Although this may lead to practice effects, any such effect would be constant across groups.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="845">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="846">
                <text>sanderson2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="847">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="848">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="849">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="850">
                <text>Project description</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="851">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="852">
                <text>Melissa Allen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="853">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="854">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="855">
                <text>Autism group. Thirty-five individuals with autism, aged between 6 and 18 years&#13;
Control group. Thirty typically developing (TD) children, aged between 6 and 11 years, were recruited from three state primary schools </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="856">
                <text>ANOVA&#13;
MANOVA&#13;
Chi-squared&#13;
correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
