<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?collection=6&amp;output=omeka-xml&amp;sort_field=added" accessDate="2026-05-01T20:13:38+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>24</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="21" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="839">
                <text>The Specificity of Inhibitory Impairments in Autism and Their Relation to ADHD-type Symptoms</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="840">
                <text>Charlotte Sanderson</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="841">
                <text>2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="842">
                <text>Findings on inhibitory control in autism have been inconsistent. It is proposed that this may be partly task-related, with different ‘inhibition’ tasks tapping different classes of inhibitory ability. Thus, children with autism (CWA) (N = 31) and typically developing controls (TDC) (N = 28) matched for verbal and non-verbal mental age completed three tasks of inhibitory control, each representing different inhibition subcomponents: a Go/No-Go task (delay inhibition), the Dog-Pig Stroop task (conflict inhibition), and a Flanker task (resistance to distractor inhibition). Behavioural ratings of inattention and hyperactivity/impulsivity were also obtained for each child to consider a possible source of heterogeneity in inhibitory ability. It was predicted that the conflict task would be more problematic for CWA, and that higher ADHD-symptom ratings would predict poorer performance. On the Go/No-Go task, CWA showed superior inhibitory function to controls – making fewer false alarm errors and showing better task sensitivity. On the Dog-Pig Stroop, CWA showed impaired performance compared to controls – making more accuracy- and speed-related inhibitory errors. On the Flanker task, CWA showed inhibitory performance equivalent to that of TD children. Inhibitory impairments were predicted by high ratings of inattention in CWA, but only on the Dog-Pig Stroop. It is argued that CWA are perhaps impaired on tasks of conflict inhibition, but not delay or resistance to distractor inhibition. This may reflect the additional working memory demands of conflict tasks, and suggests that inhibitory difficulty is not a core executive deficit in autism. Symptoms of inattention may be an important predictor of inhibitory heterogeneity amongst CWA.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="843">
                <text>inhibition&#13;
Stroop&#13;
autism</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="844">
                <text>Sessions were completed in a well-lit and quiet room, free of distractions. Participants were tested individually, and completed the three inhibitory control tasks (Go/No-Go task; Dog-Pig Stroop task; Flanker task) and two standardised measures (RCPM; BPVS) in counterbalanced order. The experimental session lasted approximately 40-60 minutes.&#13;
&#13;
All three inhibition tasks were written in Psyscript and run on a computer running OS X 10.6.&#13;
&#13;
&#13;
Go/No-Go Task&#13;
&#13;
Task Design. On each trial, a shape (O, ∆, □, or ◊) would appear centrally on the computer screen. The shapes were simple black line-drawings, subtending approximately 5° vertically and horizontally. Prior to the task, children were instructed to respond to three of the shapes by pressing a large external “star” button (i.e. “Go” stimuli), but to resist responding to a fourth shape (i.e. the “No-Go” stimulus). The shape designated as the “No-Go” stimulus was counterbalanced between participants. To generate a prepotent response, 75% of trials were “Go” trials requiring a button press, and 25% of trials were “No-Go” trials where the response should be withheld.&#13;
&#13;
The maximum inter-stimulus interval (ISI) (i.e. from stimulus onset to stimulus onset) was 2500ms. At the start of each trial, a fixation cross would appear at the centre of the screen for 200ms. This was then replaced by the stimulus, which remained on-screen for 200ms. After the stimulus offset, participants had a further 1000ms to respond, at which point the trial automatically terminated. Stimulus presentation was followed by a 1100ms pause before the next trial commenced. An error tone (“bleep”) was played immediately if the child made an omission error (i.e. failed to respond on a “Go” trial), or a false alarm (i.e. pressed the star button on a “No-Go” trial). A positive feedback-noise (“ping”) was played if the participant made a correct response.&#13;
&#13;
Procedure. Before starting the task, each child completed a warm-up session to familiarise themselves with the “Go” and “No-Go” stimuli. Training was terminated only when the child could correctly identify the required response for each shape. Children then completed a short practice block of eight trials containing all four stimuli presented in a fixed, but superficially random order. Then followed 144 experimental trials split into three 48-trial blocks, each separated by a short break. Stimulus presentation was randomised throughout each half block to avoid clustering of “No-Go” trials. The task (including training) lasted approximately ten minutes.&#13;
&#13;
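The trial structure described above (75% “Go”, 25% “No-Go”, randomised within each half of a 48-trial block to avoid clustering of “No-Go” trials) can be sketched as follows. This is a hypothetical reconstruction in Python – the original task was written in Psyscript – and the function and shape names are illustrative only:

```python
import random

GO_SHAPES = ["circle", "triangle", "square"]  # the three "Go" stimuli
NOGO_SHAPE = "diamond"                        # counterbalanced between participants

def make_block(n_trials=48, seed=None):
    """Build one experimental block: 75% Go / 25% No-Go trials,
    with No-Go trials randomised within each half of the block
    so that they do not cluster."""
    rng = random.Random(seed)
    half = n_trials // 2
    block = []
    for _ in range(2):  # two halves of the block
        nogo = [NOGO_SHAPE] * (half // 4)  # 25% of each half is No-Go
        go = [rng.choice(GO_SHAPES) for _ in range(half - len(nogo))]
        trials = nogo + go
        rng.shuffle(trials)  # randomise within this half only
        block.extend(trials)
    return block

block = make_block(seed=0)
```

On this sketch, each 48-trial block contains 12 “No-Go” trials, six in each half, matching the 25% proportion described above.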
&#13;
Four measures of task performance were obtained:&#13;
&#13;
1. Number of false alarms (or commission errors): “No-Go” trials on which the button was pressed. This is the main measure of inhibitory control, with false alarms representing failure to inhibit the prepotent button-press response.&#13;
&#13;
2. Number of hits: “Go” trials on which the child responded. This is not a main measure of inhibitory control performance, but indicates how reliably participants detect targets when present and suggests the strength of the prepotent response generated. Hit-rates are also used for calculations of task sensitivity (see below).&#13;
&#13;
3. Task Sensitivity: Estimates of participants’ task sensitivity can be calculated using signal detection theory (A′) and probability estimates of False Alarms and Hits. This permits differentiation between participants who make fewer false alarms, but also fewer hits (poor task sensitivity), and those who make fewer false alarms despite a good hit rate (good task sensitivity). This is important because a low false alarm rate could be due to a generally low response rate (for both “Go” and “No-Go” stimuli). Task sensitivity (A′) is a nonparametric measure which ranges from 0.5 (chance performance) to 1 (perfect sensitivity), and is calculated as follows (Grier, 1971):&#13;
&#13;
A′ = 0.5 + [(H - FA)(1 + H - FA)] / [4H(1 - FA)]&#13;
&#13;
where H = probability(Hits) and FA = probability(False Alarms).&#13;
&#13;
4. Hit Trial Reaction Time (RT): Although not a measure of inhibitory control per se, this might indicate between-group differences in processing speed and/or task-strategy.&#13;
&#13;
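As an illustration, Grier’s (1971) A′ index above can be computed directly from the hit and false-alarm probabilities. The Python sketch below is illustrative (the helper name a_prime is not from the original materials) and uses the standard form of the formula, which assumes the hit rate is at least the false-alarm rate:

```python
def a_prime(hit_rate, fa_rate):
    """Grier's (1971) nonparametric sensitivity index A'.

    hit_rate and fa_rate are probabilities in [0, 1]; this standard
    form of the formula assumes hit_rate >= fa_rate.
    """
    H, FA = hit_rate, fa_rate
    return 0.5 + ((H - FA) * (1 + H - FA)) / (4 * H * (1 - FA))

# A' ranges from 0.5 (chance) to 1 (perfect sensitivity):
a_prime(1.0, 0.0)  # perfect sensitivity: 1.0
a_prime(0.5, 0.5)  # chance performance: 0.5
```

A participant with many hits and few false alarms scores close to 1; one whose hit and false-alarm rates are equal scores 0.5, regardless of overall response rate.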
&#13;
Dog-Pig Stroop Task&#13;
&#13;
Task Design. The stimuli used in this task were two simple line drawings of a dog and a pig (see Appendix 4). Stimuli were presented centrally on the computer screen, subtending approximately 6° vertically and 9° horizontally. Two experimental conditions, each containing 32 trials, were administered. In the control (baseline) condition, children were simply instructed to say “dog” when they saw the dog image, and “pig” when they saw the pig, as quickly as possible. In the Stroop (i.e. inhibition) condition, children were instructed to say “dog” to pig images, and “pig” to dog images, as quickly and accurately as possible.&#13;
&#13;
Children’s responses were recorded by an assistant during the task, and also audiotaped so that transcripts could subsequently be checked by the experimenter. If a child made a mistake on a trial and then corrected themselves, their initial response was recorded. To estimate response latency on each trial, the experimenter would press a large external button as soon as the child made their initial response. Although this measure of reaction time is relatively crude, many of the children taking part would not have been testable with throat microphones which measure voice-onset. These technologies are highly sensitive to all sounds, including subtle body movements, lip-smacks and vocalisations, reducing their reliability for use with participants who might have difficulty minimising task-irrelevant movement or vocalisations. It is also notable that the additional error in reaction-time estimates induced by this method would be constant across groups.&#13;
&#13;
On each trial, the stimulus remained centrally on-screen until a response had been registered (i.e. the response button had been pressed). If no response had been registered after 3000ms had elapsed, the trial automatically terminated, and the message “Too Slow” was presented for 500ms. Stimulus presentation was followed by a 2000ms pause (inter-trial interval) before the next trial commenced. The maximum ISI was thus 5500ms.&#13;
&#13;
Procedure. All children completed the control condition first to provide a measure of baseline picture naming speed and accuracy[2]. After the control condition had been completed, children were presented with training slides to familiarise them with the Stroop naming procedure. After successfully completing the four practice trials, children would commence the 32-trial Stroop condition block. The task (including training) lasted approximately 7 minutes.&#13;
&#13;
&#13;
Flanker Task&#13;
&#13;
Task Design. For this computer task, children were presented with two large arrow-shaped buttons – one pointing left and one pointing right. There were three experimental conditions: baseline, congruent, and incongruent. Children were asked to respond by pressing the arrow-button pointing the same way as the white target arrow, which was positioned centrally, subtending approximately 4° vertically and 6° horizontally. On baseline trials, the white target arrow was presented on its own. On congruent trials, the white target arrow was flanked by four red ‘distractor’ arrows pointing the same way as the target (e.g. →→→→→). On incongruent trials, the white target arrow was flanked by four red ‘distractor’ arrows facing in the opposite direction to the target arrow (e.g. ←←→←←). It is thus only on incongruent trials that the distractors must be actively inhibited/suppressed for correct target identification.&#13;
&#13;
The maximum ISI was 2900ms. A fixation cross would appear centrally on-screen for 200ms. This was then replaced by the stimulus (baseline, congruent or incongruent), which remained on-screen until a button-press had been registered. If no response had been registered after 1200ms had elapsed, the trial automatically terminated. An error-tone (“bleep”) was played if the participant pressed the wrong arrow-button. If the child failed to respond before the trial terminated, an error-tone was played and a “Too-Slow” message was briefly displayed. When the child responded correctly, a positive feedback-noise was given (a “ping”). There was a 1100ms pause (inter-trial interval) between trials.&#13;
&#13;
Procedure. Each child first completed a series of familiarisation trials. This was followed by three blocks of 30 trials separated by a short break (90 trials in total). Each block contained ten baseline, ten congruent and ten incongruent trials, which were distributed randomly. Error-rates and mean reaction times (RT) for baseline, congruent and incongruent trials were recorded.&#13;
&#13;
&#13;
[1] Although a cut-off of 30-points is typically used with younger children, a slightly lower cut-off score is thought to be more accurate for use with older children/adolescents (Mesibov et al., 1989). This is due to the inclusion of one or two items on which older children with autism tend not to score highly (e.g. imitation).&#13;
&#13;
[2] Condition-order was fixed because a pilot study showed that if children completed the experimental (i.e. Stroop) condition first, they had difficulty forgetting the ‘opposite’ rule in order to name the pictures normally for the control condition. This was shown by elevated error-rates and poorer naming speeds. Therefore, in order to obtain a realistic measure of ‘automatic’ (i.e. control-condition) picture naming speed and accuracy, and a stronger prepotent response, it was deemed appropriate to fix the order of condition presentation (Control, then Stroop). Although this may lead to practice effects, the effect is constant across groups.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="845">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="846">
                <text>sanderson2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="847">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="848">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="849">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="850">
                <text>project description</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="851">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="852">
                <text>Melissa Allen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="853">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="854">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="855">
                <text>Autism group. Thirty-five individuals with autism, aged between 6 and 18 years.&#13;
Control group. Thirty typically developing (TD) children, aged between 6 and 11 years, were recruited from three state primary schools.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="856">
                <text>ANOVA&#13;
MANOVA&#13;
Chi squared&#13;
correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="22" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="857">
                <text>The effect of different question types during shared book reading on children’s narrative comprehension</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="858">
                <text>Nicola Pooley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="859">
                <text>2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="860">
                <text>This study investigated the effect of different question types on narrative comprehension in young children. Forty-one five-year-olds participated in this study. One group (N=14) received three sessions of shared storybook reading in which they practised answering questions about literal information in the story, during the course of the storybook reading. A second group (N=13) practised answering questions about information that had to be inferred. A third group of controls (N=14) did not receive any intervention. All groups completed two comprehension assessments before and after the intervention: one was a measure of general listening comprehension, the other included measures of both literal and inferential comprehension. Children’s engagement during the storybook reading was also assessed. Contrary to predictions, neither intervention benefitted post-test comprehension significantly. In addition, engagement levels did not change over the course of the study. However, a consistent pattern was found for each comprehension measure: the group who received practice with answering inferential questions made the greatest gains. Implications for early literacy experiences are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="861">
                <text>reading comprehension</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="862">
                <text>Design&#13;
&#13;
The study was an intervention design with three phases: a pre-test, a training phase and a post-test. There were three groups: two experimental groups who participated in all phases and a control group who only completed the pre- and post-tests. The design is shown in Table One. In the pre- and post-test sessions, participants completed a general measure of listening comprehension (adapted from the Neale Analysis of Reading Ability II, Neale, 1997) and a bespoke measure of listening comprehension with questions to tap literal and inferential comprehension. Participants were assigned to groups on the basis of their scores in the pre-test so that the three groups (two intervention and one control) did not differ in their performance on these measures (see Table Five). Children in the intervention conditions listened to three stories in separate sessions and received practice at answering either literal or inferential questions throughout the stories. In the post-test all children were again assessed on alternate forms of the same measures used in the pre-test.&#13;
&#13;
&#13;
&#13;
Table 1. Intervention design used.&#13;
&#13;
Group | Pre Test | Training 1 | Training 2 | Training 3 | Post Test&#13;
Control | General + Bespoke Listening Comprehension | x | x | x | General + Bespoke Listening Comprehension&#13;
Literal | General + Bespoke Listening Comprehension | Literal | Literal | Literal | General + Bespoke Listening Comprehension&#13;
Inferential | General + Bespoke Listening Comprehension | Inferential | Inferential | Inferential | General + Bespoke Listening Comprehension&#13;
&#13;
(x indicates no training session.)&#13;
&#13;
&#13;
Materials: Pre and Post test.&#13;
&#13;
General Measure of Listening Comprehension. The following stories taken from the Neale Analysis of Reading Ability (NARA, Neale, 1997) were read to each participant in either the pre or post test: Toys, Tree house, Lost and Found, Road Safety. Toys or Lost and Found were used as practice tasks at the beginning of the pre/post test to help develop rapport. The stories were chosen so that the level of difficulty was consistent pre and post test. Comprehension questions that went with each story were asked at the end of the story to obtain the general measure of listening comprehension score. Table Two shows an example of a story and some of the questions given.&#13;
&#13;
&#13;
Table 2. Example of general listening comprehension story used.&#13;
&#13;
General Comprehension Example: My friend and I made a tree house. We like to hide in it. We climb up the rope and pull it up after us. Then no-one knows where we are. We play space-ships. At tea time we slide down fast and we are always first for tea.&#13;
&#13;
Sample of Questions asked:&#13;
- What would you say was the best name for that story?&#13;
- Who built the tree house?&#13;
- How did the children always manage to be first for tea?&#13;
&#13;
&#13;
Bespoke Measure of Listening Comprehension. The stories used in this study were a series of books about a dog named ‘Harry’ written by Gene Zion. These stories were chosen as they were first published between 1956 and 1965, and so were suitable for this age group, but the children were not likely to be familiar with them. The pictures from the stories were scanned, then printed on A4 sheets and laminated to make a set of wordless picture books. The original text was retained for each story; however, small sections of some of the stories were omitted to try to keep each story the same length.&#13;
&#13;
Two different types of questions were used in the bespoke measure of listening comprehension: literal and inferential questions. In the pre- and post-tests each child received eight literal questions and eight inferential questions after each story reading. The literal questions required the participants to recall facts from the text. The inferential questions tapped children’s ability to make inferences about information that was not stated explicitly in the text. These questions were designed to address: causality (why an event happened), emotions (how a character was feeling) and future events (what might happen next in the story). The inferential questions in the pre- and post-tests, however, consisted of four emotion and four causality questions, as prediction questions could not be used at the end of the story. Table Three gives examples of literal and inferential questions used.&#13;
&#13;
Table 3. Examples of literal and inferential questions used.&#13;
&#13;
1. Extract: Harry was a white dog with black spots who liked everything except having a bath. So one day when he heard the water running in the tub he took the scrubbing brush and buried it in the back garden.&#13;
Literal: What did Harry bury in the back garden? Forced choice: the scrubbing brush/a sponge.&#13;
Causal Inferential: Why do you think Harry buried the scrubbing brush in the back garden? Forced choice: Because the family told him to/Because he did not want a bath.&#13;
&#13;
2. Extract: That night Harry slept in the dog house – again.&#13;
Literal: Where was Harry made to sleep again? Forced choice: in the kitchen/in the dog house.&#13;
Emotion Inferential: How do you think Harry felt about sleeping in the dog house? Forced choice: happy/sad.&#13;
&#13;
3. Extract: (After a sequence of events that led to Harry being covered in seaweed and thinking the hot dog man was calling his name.) Harry still thought the man was calling his name. He barked and jumped with joy. He jumped so much that suddenly…&#13;
Literal (asked before “he jumped so much…”): What was the hot dog man really shouting? Forced choice: Hurry/Harry.&#13;
Prediction Inferential*: What do you think happened next? Forced choice: Everyone ran away/the seaweed fell off him.&#13;
&#13;
*Please note: prediction questions were only used during the intervention sessions.&#13;
&#13;
Materials: Intervention&#13;
&#13;
Three of the stories were used for the intervention sessions. Scripts were produced that embedded the intervention questions at set points during each story. In the inferential intervention group there were four questions of each type: causal, emotion and prediction. The inferential and literal questions were always placed at the same points in the story.&#13;
&#13;
Procedure&#13;
&#13;
Phase One: Pre-test. Children in all groups completed the general listening comprehension measure and the bespoke measure of literal and inferential comprehension. Each child was tested individually in a quiet space away from the classroom. The pre-test session was audio- and video-recorded, with the video recorder set in front of the participant to capture their direction of eye gaze. The experimenter explained the task to the child and obtained verbal consent. In the pre-test the experimenter asked the child if they had heard any stories about Harry the dog while showing them the front cover. One child reported recognising the story, but could not remember any details.&#13;
&#13;
General Listening Comprehension Measure. Each participant was read two stories, the first of which acted as a practice task to help develop rapport. Immediately after each story the children were asked the comprehension questions for that story. If a child could not answer a question, the experimenter offered the correct response and moved on to the next question. If the child gave an incorrect answer, the experimenter did not highlight that this was incorrect but simply moved on to the next question. The decision to respond to answers in this way was based on the pilot of the procedure: this age group seemed to become easily disengaged if they repeatedly supplied no answer or an incorrect answer, and it was felt that this way of responding helped to maintain their confidence and interest in the task. Responses were scored as correct or incorrect; acceptable answers were provided in the NARA manual.&#13;
&#13;
Bespoke Listening Comprehension Measure. After the assessment of general listening comprehension each child completed the bespoke listening comprehension task. The experimenter read out the story whilst the child followed the pictures in a wordless picture book version. At the end of the story sixteen questions were asked: eight literal and eight inferential, of which four were causal and four were emotion related. If the child could not answer a question or gave the wrong answer, s/he was offered a forced choice of two possible answers (examples in Table Three); one option was the correct target answer and one was incorrect. The forced choices were included in the pre- and post-test because they were also used during the intervention; however, answers based on a forced choice were not included in the analysis. In the pre- and post-test, if the child chose the correct response the experimenter agreed with the child and moved on to the next question; if the child selected the incorrect option, the experimenter also continued with the next question. The decision was taken not to correct the child at this stage: if the child was still answering incorrectly despite assistance, giving them the correct answer might change the representation they had created of the story and also affect their confidence, as mentioned earlier. The forced choices were alternated so that the correct answer occurred equally often in first and second position across items. When scoring the responses, a child who gave the correct answer unaided (i.e. without the forced choice option) was given one point; all other responses were scored zero.&#13;
&#13;
Phase two: Intervention (Intervention groups only). The intervention sessions took place the week after the selection phase, on three consecutive days. On each day, each child in the intervention groups was tested individually in a quiet space away from the classroom and the session was audio-recorded. Different stories were used in each session. As the stories were read to the participant they were asked questions (either literal or inferential depending on group assignment) about the story content. Children in the control group were not read to by the experimenter during this phase.&#13;
&#13;
Literal Questions Intervention Group. Children in this condition were read one story in each of the three intervention sessions and asked twelve questions that assessed their understanding of explicit details in the story, e.g., ‘What did the lady next door sing louder than?’ The questions were positioned throughout the text and related directly to information that had just been given in the story. If the children gave no response or an incorrect response, they were offered the forced choice. If a child still gave an incorrect answer after the forced choice, the experimenter corrected them and offered the correct answer. This was to help ensure that the children were building accurate representations as they listened to the stories.&#13;
&#13;
Inferential Questions Intervention Group. The same stories and question-response technique were used as outlined in the literal questions condition, and questions were placed at the same positions in the text; however, children in this condition were asked twelve inferential questions throughout each story that required them to think beyond the facts present in the text. In each story there were four causal inferential questions, e.g., ‘Why were Harry’s ears hurting?’, four prediction questions, e.g., ‘What do you think Harry did next?’, and four questions assessing understanding of the characters’ emotions, e.g., ‘How do you think Harry felt when the old lady told him to go away?’&#13;
&#13;
Phase three: Post-test. This session took place between five and seven days after the final intervention session and followed the same format as the pre-test. Children in all three groups completed the general listening comprehension story and the bespoke listening comprehension story with literal and inferential questions asked at the end of the story.&#13;
&#13;
Measure of Engagement. The video recordings from the pre and post-test were analysed for the children’s level of engagement. This was only based on the child’s behaviour during the reading of the bespoke listening comprehension story. The coding scheme used for this analysis is shown in Table Four. A second rater scored 20% of the pre-test videos. There was 100% agreement between raters.&#13;
&#13;
Table 4. Coding scheme used to analyse level of engagement while listening to the bespoke story.&#13;
&#13;
Code&#13;
&#13;
Description of Behaviour&#13;
&#13;
1&#13;
&#13;
Limited Engagement. The child appears off-task and makes a large number of unrelated comments or is distracted and looking away for a large part of the story reading.&#13;
&#13;
2&#13;
&#13;
Engaged – Quiet. The child looks at the pictures and listens well throughout the story but does not make any independent comments.&#13;
&#13;
3&#13;
&#13;
Engaged – Interactive. The child looks at the pictures and listens well throughout the story. They also make independent comments, relating the events in the story to their own lives, elaborating on the text, or asking questions about the text.&#13;
&#13;
&#13;
Group Assignment.&#13;
&#13;
Scores on the pre-test measures were used to assign the children to groups, to ensure an equal range of scores in each. One-way analyses of variance were carried out on the general comprehension, literal and inferential scores (all F&lt;1.0, all p&gt;0.1). In addition, where possible, an equal number of boys and girls were assigned to each group. Table Five shows the ages, numbers of boys and girls, and pre-test scores for each group.&#13;
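For readers reworking these data, the group-equivalence check described above can be sketched in a few lines of Python; the scores below are hypothetical placeholders, not the study’s data.

```python
def one_way_anova_f(groups):
    """F ratio for a one-way ANOVA, computed from first principles:
    between-group mean square over within-group mean square."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (len(scores) - len(groups))
    return ms_between / ms_within

# Hypothetical pre-test scores for three groups with similar means;
# groups were treated as equivalent when F < 1.0.
f = one_way_anova_f([[4, 5, 3, 6], [5, 4, 4, 6], [4, 6, 3, 5]])
```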
&#13;
Table 5. Distribution of gender, age and pre-test scores across groups.&#13;
&#13;
Variable                              Control   Literal   Inferential&#13;
Gender (male/female)                  8/6       7/7       7/6&#13;
Age (years;months)                    5;5       5;5       5;4&#13;
General Comprehension (proportion)    0.43      0.46      0.46&#13;
Bespoke Literal (max=8)               3.79      3.79      4.15&#13;
Bespoke Inferential (max=8)           5.00      4.50      4.77&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="863">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="864">
                <text>Pooley2010</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="865">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="866">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="867">
                <text>Project description</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="868">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="869">
                <text>Kate Cain</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="870">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="871">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="872">
                <text>43 children (23 boys, 20 girls, mean age 5 years 4 months and range 4 years 9 months to 5 years 9 months) in their first year of primary school</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="873">
                <text>Chi-squared&#13;
McNemar test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="23" public="1" featured="1">
    <fileContainer>
      <file fileId="5">
        <src>https://www.johnntowse.com/LUSTRE/files/original/fc27f6fa5aa3b5c2ec188de4cbeefc44.pdf</src>
        <authentication>2983d0be2c388322ede175f2da332d2c</authentication>
      </file>
      <file fileId="6">
        <src>https://www.johnntowse.com/LUSTRE/files/original/ae430f6c841f862e00a44f12d0df1e8a.pdf</src>
        <authentication>b9bd1185b1ff26c600843d03fd22e71c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="874">
                <text>Running Memory Span Development: The Input Mechanism and Hebb effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="875">
                <text>Yu Xie</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="876">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="877">
<text>It is unclear whether an active or a passive strategy is used in the running memory task, and whether the Hebb effect is elicited by it. The aim of this study was to explore the input mechanism and the Hebb effect in the running memory task via a developmental study. Children were asked to perform four working memory tasks: a counting span task, a free recall task, a Hebb digit task, and a running memory task. In order to explore the Hebb effect in the running memory task, the last three digits of every third list were repeated. The results suggested that running memory was a recency-based phenomenon and that the Hebb effect was elicited in children. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="878">
                <text>running memory span development&#13;
input mechanism&#13;
Hebb effect&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="879">
<text>The experiment was presented using SuperLab 4.0 on a Sony laptop with a 14-inch colour screen. The responses of participants were recorded by the tester on answer sheets. Every child completed a counting span task, a free recall task, a Hebb digit task, and a running memory task.&#13;
Counting span task. The counting span arrays were developed from Towse and Hitch (1995) and consisted of equal numbers of target triangles and non-target squares. The target triangles were red, approximately 30 mm in length, and the non-target squares were blue, approximately 28 mm in length. The number of both target triangles and non-target squares varied from 3 to 9 (mean = 6). The counting span arrays were presented at the centre of the computer screen on a white background. The triangles and squares were randomly displayed at different positions in every display.  &#13;
Free recall task. For this task, 144 high-frequency Chinese two-syllable nouns (see Appendix A) were recorded in a male voice at a rate of 1 word per second, using Adobe Audition 3.0. Two practice lists and ten test lists were presented; every list included 12 words, played by the computer at 1 word per second.&#13;
Hebb digit task. All digit lists contained the digits 1 to 9 in random order, with no digit repeated within a list (see Appendix B). The digit recordings were made with Adobe Audition 3.0 at a rate of 1 digit per second. There were 2 practice lists and 24 test lists, each containing nine digits. Among the test lists, 16 were different and the other 8 were identical – the Hebb list – presented on every third trial beginning on Trial 3. The 24 test lists were thus divided into 8 blocks, each comprising 2 different lists and the Hebb list. &#13;
Running memory task. The lists included 12, 14, 16, 18, or 20 random digits from 1 to 9 (see Appendix C), presented as audio recordings. Two presentation rates were used in this task: 0.5 s per digit as the fast rate and 2.5 s per digit as the slow rate. In both conditions, there were 2 practice lists and 24 test lists. In order to test the Hebb effect in the running memory task, the 24 test trials comprised 16 completely different lists and 8 lists whose last 3 digits were identical, presented on every third trial. &#13;
Procedure &#13;
The experiment lasted 45 min, and every child completed the 4 tasks. Each participant was seated in front of the computer screen at a distance of 65 cm. All tasks included two practice trials to help children become familiar with the procedure; once children had completed the practice trials and understood the procedure, they proceeded to the test trials. While children were performing the tasks, the experimenter gave no feedback about the accuracy of the words or digits. Task order was counterbalanced using a Latin square design, as shown in Table 1. Because the running memory task had two conditions, fast and slow presentation, the order of these two conditions was also counterbalanced. In all, therefore, there were eight orders in the present study, and the children were divided equally into eight groups based on these orders. After completing each task, participants were given sufficient time to rest. &#13;
Table 1&#13;
Task Orders for Four Tasks&#13;
&#13;
Task                   Orders&#13;
                       a      b      c      d      e      f      g      h&#13;
Counting span task     1      2      3      4      1      2      3      4&#13;
Free recall task       2      1      4      3      2      1      4      3&#13;
Hebb digit task        3      4      1      2      3      4      1      2&#13;
Running memory task    4(FS)  3(FS)  2(FS)  1(FS)  4(SF)  3(SF)  2(SF)  1(SF)&#13;
Note. F = Fast-running memory task, S = Slow-running memory task.&#13;
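Read column-wise, orders a–d in Table 1 form a 4 × 4 Latin square (orders e–h repeat them with the running-memory rate order reversed). A small Python check of that property, with the task orders transcribed from the table:

```python
# Orders a-d from Table 1, written as the sequence of tasks each group
# performed (position 1 first).
orders = {
    "a": ["counting", "free recall", "Hebb", "running"],
    "b": ["free recall", "counting", "running", "Hebb"],
    "c": ["Hebb", "running", "counting", "free recall"],
    "d": ["running", "Hebb", "free recall", "counting"],
}

tasks = {"counting", "free recall", "Hebb", "running"}

# Latin square property: every task occupies every position exactly once
# across the four orders.
for pos in range(4):
    assert {order[pos] for order in orders.values()} == tasks
```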
Counting span task. The children were introduced to the counting and recall requirements. Before every trial, a fixation symbol was displayed at the centre of the screen for 0.5 s. When the target triangles and non-target squares were presented, participants were required to count the red triangles aloud and repeat the final number. Once the children had repeated the last number, the experimenter pressed a key to show the next display, and the counting speeds were recorded by the computer automatically. There were three trials at every level, and each trial at level n included n + 1 displays. For example, participants counted 2 displays at level 1 and 3 displays at level 2; the final level was level 4, which contained 5 displays. After the 2 to 5 displays, children were asked to report all the final numbers of red target triangles in the previous displays. If a child failed to recall correctly on at least two of the three trials, the counting span task ended at that level; otherwise, they progressed to the next level. &#13;
Free recall task. Children were required to listen to the words and, after the 12th word, to recall as many as possible in any order. The experimenter wrote down the responses of participants on answer sheets. If a child could not report a new word within 30 s, the experimenter proceeded to the next trial. &#13;
Hebb digit task. The procedure for the Hebb digit task was developed by Hebb (1961). Children were asked to listen to every list and report all digits in the correct order. Children reported the digits orally, and the experimenter recorded the responses on an answer sheet. Because the running memory task also involved Hebb lists, 48 children were asked whether they were aware of any regular pattern in the digit tasks after they had completed both the Hebb digit task and the running memory task. Only 5 participants noticed the repetition in the running memory and Hebb digit tasks.&#13;
Running memory task. Children listened to lists of digits, different from those in the Hebb digit task; they were required to repeat only the last three digits rather than all digits in the list. To counterbalance order effects across the two rate conditions, half of the children were administered the fast rate condition first and the other half the slow rate condition first.&#13;
Scoring&#13;
Counting span task. Counting errors and counting speed were recorded, and the scoring method used was the partial-credit unit scoring prescribed by Conway et al. (2005). First, the correct items in each sequence were counted. If all items in a sequence were correct, that sequence was given one point; otherwise, the score for the sequence was the proportion of correct items. Finally, the counting span of a participant was calculated as the sum of the scores for all sequences. &#13;
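The partial-credit tallying just described can be sketched as follows (a minimal illustration, not the analysis code used in the project):

```python
def partial_credit_span(trials):
    """Partial-credit unit scoring: each trial contributes the
    proportion of items recalled correctly (a fully correct trial
    contributes 1); the span is the sum across trials.
    `trials` is a list of (n_correct, n_items) pairs."""
    return sum(n_correct / n_items for n_correct, n_items in trials)

# A perfect 2-display trial plus a 1-of-3 trial scores 1 + 1/3.
span = partial_credit_span([(2, 2), (1, 3)])
```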
Free recall task. The scoring method used was the one prescribed by Tulving and Colotla (1970), which involved calculating the intratrial retention interval (ITRI). The ITRI for an item was the number of items intervening between its presentation and its recall, i.e., the items presented after it plus the items recalled before it. For instance, if the sequence was A, B, C, D, E, F, and G, and a participant reported G, F, and A, the ITRIs for those items were 0, 2, and 8, respectively. Before calculating the ITRI, the digit span over the Hebb non-repeating lists was calculated for every child. If a child’s digit span was 5, an item was classified as a word from primary memory when its ITRI was 5 or less, and as a word from secondary memory when its ITRI was 6 or more. &#13;
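The worked example above can be reproduced with a short sketch (illustrative only; the item labels are placeholders):

```python
def itri_scores(sequence, recalled):
    """Intratrial retention interval (Tulving & Colotla, 1970) for each
    recalled item: the number of items presented after it plus the
    number of items recalled before it."""
    return {
        item: (len(sequence) - 1 - sequence.index(item)) + out_pos
        for out_pos, item in enumerate(recalled)
    }

# Sequence A..G, recall order G, F, A:
# itri_scores(list("ABCDEFG"), ["G", "F", "A"]) -> {"G": 0, "F": 2, "A": 8}
```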
Hebb digit task. Every digit recalled correctly at the correct position was scored one point. The score for the non-repeating lists was the mean score across the non-repeating lists, and the score for the repeating lists was the mean score across the repeating lists. &#13;
Running memory task. The running memory span was calculated as the mean number of digits recalled in the correct positions. If all 3 digits were recalled in the correct sequence, the score was 3; if 2 digits (for example the first and second, the second and third, or the first and third) were in the correct serial order, the score was 2; if a single digit was in the correct position, the score was 1. As in the Hebb digit task, the scores for non-repeating and repeating lists were computed separately.  </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="880">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="881">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="882">
                <text>Xie2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="883">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="884">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="885">
                <text>English&#13;
Chinese</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="886">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="887">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="888">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="889">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="890">
                <text>Developmental Psychology&#13;
Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="891">
                <text>Fifty-seven Chinese primary school students (23 female, 34 male), aged between 7 and 13 years (Mean = 9 years 6 months; SD = 1.754) took part in the present study. The children were recruited from Grade one to Grade six at Tianyi School in Xuancheng City</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="892">
                <text>ANOVA&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="25" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="912">
                <text>The Effect of Sleep on the Processing of Emotional False Memories</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="913">
                <text>Chloe Newbury</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="914">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="915">
                <text>People often think they remember events and information that in fact never happened. In previous studies using the Deese-Roediger-McDermott (DRM) paradigm, participants viewed lists of semantically related words, and during testing were more likely to accept as seen words that were related to the lists but were actually unseen, indicating a false memory. Research suggests that sleep promotes this effect, as does the use of negatively valenced stimuli, although the effect of emotion is disputed. The current study investigated what effect emotion, in particular valence, has on false memory formation, and whether sleep promotes emotional false memories. Fifty participants were tested on their recognition performance using an emotional and neutral DRM paradigm after a 12-hour period of sleep or wake. As predicted, we found an increase in false recognition of negatively valenced lure words, as well as an overall effect of emotion, with emotional words leading to increased false recognition compared to neutral. We failed to replicate any sleep effect on performance accuracy of neutral or emotional memory, although the response time data indicates some effect of sleep on emotional memory performance. The quality of participants’ sleep and design of the current study are explored as possible explanations for this lack of a sleep effect. This study therefore indicates that emotion plays a significant role in the formation of false memories independent of sleep.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="916">
                <text>DRM&#13;
false memory</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="917">
                <text>Negative and positive DRM word-lists and critical lures were taken from Brainerd, Holliday, Reyna, Yang, and Toglia (2010), who controlled for other properties that are thought to affect false memory formation, including concreteness, meaning and frequency of words (Roediger, Watson, McDermott, &amp; Gallo, 2001). Neutral DRM lists and critical lures were taken from Stadler, Roediger, and McDermott (1999). Two separate lists were formed, one with negative and neutral words, and the other with positive and neutral words (see Appendix A for word-lists). Participants in both the positive and negative conditions viewed the same five lists of neutral words, as well as ten negative or positive word-lists. &#13;
Mean valence and arousal scores for word-lists and critical lures were taken from the Affective Norms for English Words (ANEW) (Bradley &amp; Lang, 1999). Independent samples t-tests showed that positive words had significantly higher ratings of valence than negative words, t(11.41) = 7.42, p &lt; .001, and neutral words, t(13) = 7.43, p &lt; .001. Negative words had significantly lower ratings of valence than neutral words, t(13) = 2.31, p = .038. Furthermore, negative and positive word-lists did not significantly differ in terms of arousal, t(12.92) = 0.52, p = .613; however, neutral words had significantly lower ratings of arousal than positive, t(13) = 2.67, p = .019, and negative words, t(13) = 4.87, p &lt; .001. It was also important that word-lists were controlled in terms of frequency and BAS. Frequency scores were taken from the MRC Psycholinguistic Database (Coltheart, 1981). Independent samples t-tests showed no significant difference in frequency ratings between negative and positive word-lists, t(18) = 0.18, p = .816, positive and neutral word-lists, t(13) = 0.35, p = .735, and negative and neutral word-lists, t(13) = 0.50, p = .624. BAS ratings were taken from the University of South Florida Free Association Norms (Nelson, McEvoy &amp; Schreiber, 1998). There was no significant difference in ratings of negative and positive words, t(18) = 4.92, p = .629, positive and neutral words, t(13) = 0.32, p = .757, and negative and neutral words, t(13) = 0.89, p = .391. (See Appendix B for mean ratings). &#13;
For critical lures, independent samples t-tests showed that positive lure words had higher ratings of valence than negative lures, t(15.11) = 11.20, p &lt; .001, and neutral lures, t(11) = 4.24, p = .001. Negative lures had significantly lower ratings of valence than neutral lures, t(11) = 3.62, p = .004. There was no reliable difference between ratings of arousal for negative and positive lures, t(18) = 0.22, p = .828, positive and neutral lures, t(11) = 1.08, p = .305, and negative and neutral lures, t(11) = 1.62, p = .134. There was no reliable difference between frequency ratings of negative and positive lures, t(18) = 1.14, p = .268, positive and neutral lures, t(13) = 0.55, p = .593, and negative and neutral lures, t(13) = 1.11, p = .287. (See Appendix B for mean ratings).&#13;
During testing, participants viewed 60 words in total: two previously seen from each DRM list (total of 30), the critical lure associated with each list (total of 15), and an unrelated word for each list (total of 15). Unrelated words were taken from lure words of unused DRM lists, as well as from Kousta, Vinson, and Vigliocco (2009), who developed emotional and neutral word-lists using the ANEW database. Unrelated words were matched to DRM word-lists in terms of valence, resulting in five unrelated neutral words, ten unrelated negative words and ten unrelated positive words. All words were presented in 18-point Courier New bold, in black, lower case. &#13;
Participants in the sleep condition were required to wear an actigraph sleep monitor to more accurately measure their time spent asleep and the number of awakenings. All participants were given a questionnaire before each session to collect data on sleep habits, caffeine and alcohol intake (see Appendix C), and those in the wake condition were instructed not to nap throughout the day. &#13;
Procedure&#13;
Participants were randomly allocated to either the wake or sleep group, with those in the wake group trained on word-lists at 9am and tested on the same day at 9pm. Those in the sleep group took part in the training session at 9pm, and were tested the following day at 9am. Participants were randomly allocated to the negative or positive stimuli condition. &#13;
During the training session, participants were first asked to fill out a questionnaire to assess sleep habits and caffeine and alcohol intake. Participants were then required to sit approximately 60cm from the computer screen, and were presented with 15 lists of 12 words presented one word at a time in the centre of the screen. They were first presented with a fixation point for 500ms before the words from one list were presented for 1500ms each. After each list participants were presented with three maths problems to solve for 1000ms each as a distractor task, in order to prevent participants from rehearsing words they had seen. Maths problems were presented in a random order for each participant, and each problem was only presented once throughout the task. After the three maths problems were presented, the fixation cross reappeared and participants were given another list to remember. The order of word-lists was randomised, and the order in which each word in a list was presented was also randomised. &#13;
Participants were then asked to return 12 hours later after a period of daytime wakefulness or overnight sleep. During the second session, participants first viewed a fixation cross for 500ms, and then the test words were presented to participants one at a time in the centre of the screen for 120ms. Participants were required to identify whether they thought they had seen the word in the previous session or not. They did this through the press of a key on the keypad, with a press of zero corresponding to an old word (previously seen), and one corresponding to a new word (previously unseen). The numbers zero and one on the keypad were labelled ‘old’ and ‘new’ respectively, to aid participants. Participants were not given a response deadline. Participants then saw the fixation point again 500ms after giving their response, before another word appeared on the screen. All words were presented in random order. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="918">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="919">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="920">
                <text>Newbury2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="921">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="922">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="923">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="924">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="925">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="926">
                <text>Padraic Monaghan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="927">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="928">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="929">
                <text>Fifty participants (32 female, 18 male) with a mean age of 25.10 (SD = 9.25, range 18 to 62) took part in the study for course credit or as a volunteer</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="930">
                <text>4-way mixed analysis of variance (ANOVA)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="27" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="950">
                <text>The Effects of Schema-typical and Atypical Contexts on Memory for Brand Names of Products</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="951">
                <text>Thanita Soonthoonwipat</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="952">
                <text>2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="953">
                <text>The memory for an advertisement can be affected by the way it is constructed. In general, the more distinctive an advertisement is, the better it is remembered. Traditionally, it has been assumed that the whole memory episode will be better remembered if it features any odd element(s), because such elements demand more attention and create stronger memory traces. However, recent evidence suggests that the distinctiveness effect might not spread to everything; it might only affect the distinctive elements themselves without necessarily affecting their linkages with other elements. Accordingly, within an advertisement, memory for each element can differ. We manipulated the distinctiveness effect by composing products with schema-typical contexts (undistinctive condition) and schema-atypical contexts (distinctive condition). Participants observed 20 advertisements; 10 were schema-typical and another 10 were schema-atypical. They then completed recall and recognition tests, which allowed us to explore how far the distinctiveness effect could extend. We found that only product recall and recognition in the schema-atypical condition were robustly enhanced; the other variables were not significantly affected. These findings run against the traditional view and are consistent with the recent research. We suggest that, in the schema-atypical condition, the products and their contexts made each other distinctive and hence were better remembered. In contrast, the brand names and product-brand bindings were schema-neutral, so they did not receive more attention and were not better remembered. The results were further interpreted to draw practical implications for improving advertising effectiveness.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="954">
                <text>Distinctiveness effects&#13;
Schema&#13;
Memory&#13;
Product recall&#13;
Product recognition&#13;
Brand recall&#13;
Brand recognition&#13;
Product-brand binding</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="955">
                <text>The stimuli were 40 newly constructed print advertisements (in digital format). Print advertisements were employed because they allow better experimental control (Keller, 1987). Half of these advertisements belonged to the toiletries category (e.g. shampoo, sunscreen, and toothpaste), whereas the other half belonged to the foods category (e.g. pizza, sandwiches, and fried chicken). For each category, there were 10 types of products. For each product, there were two versions of its advertisement, schema-typical and schema-atypical, only one of which was viewed by each participant. The schema-typical advertisements were those in which the product was bound with an expected context (e.g. a toothpaste appearing in a bathroom scene), while the schema-atypical advertisements were those in which the product was bound with an unexpected context (e.g. a toothpaste appearing in a bedroom scene). &#13;
In terms of the stimuli construction, there were three key elements for all advertisements: the first was the product; the second was the background or scene illustration, which was considered the context of that advertisement; and the last was the brand name. The first two elements formed the advertising pictures, and together with the third they formed complete advertisements. The researchers purchased stock images from the Shutterstock website (https://www.shutterstock.com). The images purchased (product shots, backgrounds, and decorative elements) were then retouched and composited into the print advertising pictures using Adobe Photoshop (Adobe Photoshop CC 2015). All the advertising pictures were controlled not to include any text, so that the only copy presented in each advertisement was its brand name. In respect of brand names, we invented new brand names for all 20 products. Each brand name was controlled to be easily pronounceable. They were names of between one and three syllables, e.g. Hans, Raven, and Moana. The brand names, set in 48-point Candara type, were placed on top of every advertising picture. Figure 1 shows examples of stimuli. Table 1 shows the list of products, brand names, their schema-typical contexts, and their schema-atypical contexts. The illustrations of all 40 advertisements can be found in Appendix A.&#13;
&#13;
Figure 1. Examples of stimuli&#13;
&#13;
Table 1 &#13;
List of products, brand names, their schema-typical contexts, and their schema-atypical contexts&#13;
&#13;
Product | Brand name | Schema-typical context | Schema-atypical context&#13;
Toiletries Category:&#13;
1. Soap | Flounder | Bathroom | Garden&#13;
2. Shower gel | Naveen | Bathroom | In the bus&#13;
3. Deodorant | Megara | Bathroom | Library&#13;
4. Perfume | Attina | Bedroom | Street&#13;
5. Sunscreen | Moana | Beach | Kitchen&#13;
6. Shaving cream | Hans | Bathroom | Office&#13;
7. Toothpaste | Pongo | Bathroom | Bedroom&#13;
8. Talcum powder | Fauna | Bathroom | Beach&#13;
9. Shampoo | Rolfe | Salon | Forest&#13;
10. Lipstick | Armoire | Office | Cooking table&#13;
Food Category:&#13;
11. Sandwich | Duchess | Kitchen | On the stairs&#13;
12. Fried chicken | O’Malley | Kitchen | Yoga room&#13;
13. Yogurt | Rialey | Kitchen | In the bus&#13;
14. Energy bar | Gaston | Sport field | Bedroom&#13;
15. Pizza | Linguini | Restaurant | Bathroom&#13;
16. Pasta | Tony | Kitchen | On the bed&#13;
17. Soup | Perdita | Kitchen | Gym&#13;
18. Raw burger | Gus | Kitchen | Study room&#13;
19. Ice-cream | Bo Bo | Street | Library&#13;
20. Fresh fruit | Raven | Garden | Bathroom&#13;
In addition, an effort was made to provide variability of context for both the schema-typical and schema-atypical advertisements. To illustrate, for the schema-typical advertisements in the toiletries category, six of the 10 products were bound with a bathroom scene as their schema-typical context, while the other four products were bound with other schema-typical contexts (e.g. a beach scene for sunscreen). Similarly, for the foods category, six products were bound with a kitchen scene as their schema-typical context, while the other four were bound with other schema-typical contexts (e.g. a restaurant scene for pizza). Furthermore, for the schema-atypical advertisements, all 20 products had their own different schema-atypical contexts. For example, a forest scene was used for shampoo, while a yoga room was used for fried chicken. Consequently, despite the effort to make the contexts of the schema-typical advertisements more varied, there was probably more variability for the schema-atypical ones.&#13;
Moreover, the judgement of whether a context was schema-typical or atypical was initially made from the researchers’ perspective. A pilot study was then conducted with five participants, who were asked to judge whether each context was schema-typical or atypical for a particular product. For all products listed, all five participants made the same typicality judgements as the researchers. &#13;
Furthermore, we constructed additional materials for the recognition test: 20 foils of similar product images and 20 foils of similar brand names. For the foil product images, we purchased another set of stock images (product shots and decorative elements), which were retouched and composited into another 20 product images shown as icons in isolation. Each foil was designed after one of the target product images; for example, we constructed the foil image of a toothpaste tube to be paired with the target image of a toothpaste tube. These two images were controlled to look similar in terms of product type and size, but different in product design (packaging and colour scheme). For the foil brand names, we invented a further 20 similar brand names, 10 for the toiletries category and another 10 for the foods category. All foil brand names were controlled to have the same characteristics as the target brand names: names of between one and three syllables that were easily pronounceable. &#13;
Design and data analysis strategy&#13;
The overall design and the variables. A repeated measures design was employed in this study. The within-subjects independent variable was the advertising context, which consisted of two levels: schema-typical and schema-atypical. There were six dependent variables, examined in separate analyses. The first three variables came from the recall test: the percentage of correctly recalled products (product recall), the percentage of correctly recalled brand names (brand name recall), and the percentage of correctly recalled product-brand bindings (product-brand binding recall). The first two variables were simply calculated as the number of correct answers divided by the total number of advertisements at each level. These variables addressed whether products and brand names would be recalled better if the advertising contexts differed from their typical schemas. The third variable, product-brand binding recall, was calculated as the number of correctly recalled sets (counted when a product was written together with its matching brand name) divided by the number of correctly recalled products. This third variable thus explored how far, when people recall a product, their memory extends to its brand name. &#13;
Likewise, the other three dependent variables came from the recognition test: the percentage of correctly recognized products (product recognition), the percentage of correctly recognized brand names (brand name recognition), and the percentage of correctly recognized product-brand bindings (product-brand binding recognition). The fourth and fifth variables were likewise calculated by dividing the number of correct answers by the total number of advertisements at each level. These variables addressed whether products and brand names would be recognized better if the advertising contexts differed from their typical schemas. The sixth variable, product-brand binding recognition, was calculated as the number of correctly recognized sets (counted when participants picked the right product image and its matching brand name concurrently) divided by the number of correctly recognized products. This last variable thus explored how far, when people recognize a product, their memory extends to its brand name. &#13;
Presentation phase. First, 20 advertisements were presented to participants. For counterbalancing purposes, the 32 participants were divided equally into four groups (eight participants each), and each group was bound to a different set of advertisements. Each set consisted of 20 advertisements: 10 from the toiletries category and 10 from the foods category. Of the 10 toiletries advertisements, half were schema-typical and half schema-atypical; of the five schema-typical advertisements, three had a bathroom as their context and the other two had other typical contexts. The same arrangement applied to the foods category: three schema-typical advertisements were bound to a kitchen scene, two to other schema-typical contexts, and the remaining five were schema-atypical. Appendix B shows the four sets of stimuli. However, the actual order of advertisements presented to participants differed from that shown in Appendix B, as all 20 advertisements in each set were randomly mixed; the positions of advertisements therefore differed across sets to minimize order effects. All advertisements were presented on a laptop screen (13-inch MacBook Air), each shown for 10 seconds using a timed PowerPoint display.&#13;
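The assignment and shuffling scheme just described can be sketched as follows. This is a hypothetical reconstruction of the logic, not the materials actually used (the stimuli were PowerPoint slides); the set labels and seeding are invented for illustration.

```python
import random

# Hypothetical sketch of the counterbalancing described above:
# 32 participants split evenly across four stimulus sets, with the
# 20 advertisements in each set shuffled to reduce order effects.

N_PARTICIPANTS = 32
SETS = ["A", "B", "C", "D"]

# Eight participants per set (cyclic assignment)
assignment = {p: SETS[p % 4] for p in range(N_PARTICIPANTS)}

def presentation_order(seed):
    """Random order of the 20 advertisement slots for one set."""
    ads = list(range(20))
    rng = random.Random(seed)
    rng.shuffle(ads)
    return ads
```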
After the presentation of stimuli, participants completed a two-minute distractor task. Immediately after this interval, they were administered a free recall test followed by a recognition test. Prior to establishing the final experimental procedure, we ran a small pilot study to determine a suitable memory interval (the duration of the distractor task). Two participants (both female, mean age = 25 years) completed the pilot, in which a 10-minute interval was employed; this led to a ceiling effect for product recognition but a floor effect for brand name recall and recognition. We therefore shortened the interval to two minutes.&#13;
Test phase. For the free recall test, participants were asked to write down, on the answer sheet, every product and brand name they could remember. Figure 2 shows the slide presented for the recall test. The recognition test was divided into two subsections: toiletries and foods. Each subsection contained 10 questions covering all 10 products in that category, giving a total of 20 main questions. The questions were presented on the same laptop screen (13-inch MacBook Air), with the toiletries-category questions first, followed by the foods-category questions. &#13;
&#13;
Figure 2. The PowerPoint slide used in the recall test&#13;
Regarding the construction of the recognition test, each question comprised two sub-questions: a product question and a brand name question. Each product question had two choices (A and B): the target product image and a foil image of a similar product. The correct answers varied randomly between A and B throughout the test. Each brand name question had 20 choices (1 to 20), comprising the 10 target brand names and 10 foils of similar brand names. Within each category, the correct answer differed for every brand name question and varied randomly between odd (1, 3, 5, etc.) and even (2, 4, 6, etc.) choices throughout the test. Figure 3 shows examples of recognition-test questions; all questions can be found in Appendix C. &#13;
  &#13;
  &#13;
Figure 3. Examples of PowerPoint slides used in the recognition test&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="956">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="957">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="958">
                <text>Soonthoonwipat2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="959">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="960">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="961">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="962">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="963">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="964">
                <text>Adina Lew</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="965">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="966">
                <text>Psychology of Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="967">
                <text>There were 32 participants (18 females, mean age = 26.21 years, range 18-35 years). Eight of them were native speakers of English, while others had English as their second language</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="968">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="29" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="988">
                <text>Competence and Warmth: How Gender Impacts Perceptions of Male and Female Speakers.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="989">
                <text>Jayne Summers</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="990">
                <text>2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="991">
                <text>Using the stereotype content model as a theoretical background, this study aimed to investigate the relationship between gender stereotypes and judgements of warmth and competence. Visual appearance has long been used to research these judgements, while auditory cues have often been overlooked. This study therefore focused on judgements made about voice and accordingly did not present participants with predetermined gender labels. Sixty-one participants, aged 19 to 60, listened to either two male or two female speakers talk about domestic violence and cancer research. Domestic violence is here defined as a women-centric topic, while cancer research is considered gender neutral. Participants completed person perception inventories for each speaker, rating them on 7-point Likert scales across 10 competence and 10 warmth items. They also completed a sexism inventory to determine whether sexism predicted a more favourable attitude toward male speakers. A 2 (gender: male vs. female; between subjects) x 2 (topic: domestic violence vs. cancer research; within subjects) ANOVA was conducted: female speakers were judged as more competent than males when speaking on domestic violence but not on cancer research, and were considered warmer than men in both cases. This indicates that women are seen as competent when speaking on issues that directly affect them, suggesting that they should be taken more seriously when speaking out about their own rights. However, traditional warmth stereotypes regarding women were upheld. These findings, along with further implications, are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="992">
                <text>gender&#13;
stereotypes&#13;
competence, warmth&#13;
stereotype content model</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="993">
                <text>Items. To compile a 20-item list of characteristics for participants to judge speakers on, 10 competence items and 10 warmth items were selected. Of these, 11 were taken from Rudman &amp; Glick (1999) and the remaining nine were considered in the original SCM. The competence scales were reliable across both speech topics, namely cancer research (CR) and domestic violence (DV): competence α = .893 (CR) and α = .931 (DV), indicating highly reliable scales. Similarly, Cronbach's alpha for the warmth dimensions was suitably high: warmth α = .918 (CR) and α = .944 (DV). Reliability for the sexism inventory was also acceptable, with an α value of .826. Examples of competence and warmth items appear below; the full list can be found in Appendix A. Competence: confident, ambitious, intelligent. Warmth: trustworthy, likeable, supportive.&#13;
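The reported reliabilities are Cronbach's alpha values. A minimal sketch of the standard formula is shown below; the helper name and the toy ratings are invented for illustration, not taken from the study's data.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
# Toy implementation for illustration; real analyses would use SPSS or similar.

def cronbach_alpha(items):
    """items: one list of scores per scale item, aligned by participant."""
    k = len(items)
    n = len(items[0])
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(variance(it) for it in items)
    totals = [sum(it[p] for it in items) for p in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))
```

Perfectly consistent items (every participant rated identically on each item) yield an alpha of 1; values above roughly .8, as reported here, are conventionally read as high internal consistency.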
Speeches. Two speeches were recorded for the experiment, one focused on domestic violence and the other on cancer research. The speeches were written to match each other closely in wording and in the information presented. For instance, the opening and closing sentences of each speech were similarly structured, as shown in Table 1.&#13;
Table 1. Examples of speech text.&#13;
Opening sentence (domestic violence): "Domestic Violence. A topic that is often glossed over as something that effects other people - not me; not you."&#13;
Opening sentence (cancer research): "Cancer. A topic we don't often like to think about – something that effects other people, but not me: not you."&#13;
Closing sentence (domestic violence): "By going to our website www.dvrefuges.co.uk you can find out more information about the great work women's refuges around the country do, and help them continue to change women's lives by donating to our cause."&#13;
Closing sentence (cancer research): "By going to our website www.ukcancer.co.uk you can find out more information about the great work that we do, and by donating to our cause, help us continue to help people diagnosed with cancer live a normal life."&#13;
The details of the speeches differed, and the content was varied enough so as not to be obviously the same to participants, but the speeches were largely similar, as can be seen in Appendix B.&#13;
Four speakers were responsible for recording the two speeches, one male and one female speaker for each topic. This allowed participants to hear both speeches spoken by either two male or two female speakers. All four speakers were from the same region and had northern accents; however, two speakers' accents differed slightly from the other two, which may have been particularly noticeable to northern participants. To account for this, one speaker with each accent was assigned to each topic condition, so any accent effects were counterbalanced and can be assumed not to have influenced judgements.&#13;
Speeches were recorded using an iPhone 6 microphone and edited in Audacity to eliminate background noise and static. Recordings were then given a plain video image of a black background with text reading either 'Recording One' or 'Recording Two'. Because recordings were counterbalanced across conditions, all four recordings appeared first or second in at least one condition; eight versions of the recordings were therefore made and embedded into Qualtrics, where the body of the survey was hosted. Participants listened to the recordings through Sony headphones during the experiment.&#13;
Procedure&#13;
Participants were assigned to one of four conditions. In each condition they listened to the first speech, on either domestic violence or cancer research, spoken by either a male or a female speaker. After listening, they proceeded to the next online page and completed the speaker evaluation, rating the speaker on the 20 warmth and competence dimensions by indicating how well they believed each item fitted the speaker on a 7-point Likert scale (1 = completely disagree, 7 = completely agree). They then listened to the second speech, spoken by a different speaker of the same gender, and completed the same evaluation for that speaker. Finally, they completed the sexism inventory (the Ambivalent Sexism Inventory; Glick &amp; Fiske, 1996), which measured participants' explicit sexist attitudes on a 5-point Likert scale (1 = strongly agree, 5 = strongly disagree); a copy of its items can be found in Appendix C. As this was a 2 (gender: female vs. male) x 2 (topic: domestic violence vs. cancer research) design with repeated measures on the second factor, the conditions differed only in the order of the speeches (domestic violence first or second) and the gender of the speakers each participant heard (male or female), for counterbalancing purposes. So as not to prompt participants to respond in a set way, the experiment was presented as concerning the evaluation of speakers, not as being explicitly about gender.&#13;
Following the main section of the experiment, participants answered several questions about how they experienced the recordings. The first, answered on a 5-point scale (1 = strongly agree, 5 = strongly disagree), was: 'how likely are you to visit the website mentioned in this speech.' This measured whether the competence of the speaker affected the likelihood that the participant would engage with the issue. Importantly, participants were also asked whether they considered each topic to be masculine or feminine, measured on a 7-point scale (1 = feminine, 4 = neither feminine nor masculine, 7 = masculine). This was included to validate the assumption that the domestic violence topic would indeed be judged as more women-centric and the cancer research topic as neutral. It is therefore of note that over 50% of participants considered domestic violence a feminine topic, others considered it gender neutral, and very few considered it masculine. The majority of participants judged cancer research as gender neutral, as intended.&#13;
Finally, participants were asked whether they had any experience of the topic at hand, either personally or through a friend or family member, as such experience may have led them to judge the topic in which they were more invested more favourably. Participants also gave their gender, nationality and age. Gender and nationality were exploratory variables of particular interest: women may be more likely than men to evaluate women as competent, and people from other cultures, particularly Eastern cultures, may hold gender roles that differ from those in the UK, which could have been reflected in their responses. Once the experiment was complete, participants were fully debriefed and could enter a competition to win a prize in return for their participation.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="994">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="995">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="996">
                <text>Sumners2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="997">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="998">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="999">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1000">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1001">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1002">
                <text>Tamara Rakic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1003">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1004">
                <text>Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1005">
                <text>61 participants (14 male, 41 female, and 6 non-binary people) with an age range from 19 to 60 (M= 24.95, SD =9.63), were recruited through opportunity and snowball sampling</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1006">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="31" public="1" featured="1">
    <fileContainer>
      <file fileId="84">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d9ec28d2595cae82a23d00f217468f9b.doc</src>
        <authentication>0b3f1388984a2d5a7508900b80476211</authentication>
      </file>
      <file fileId="85">
        <src>https://www.johnntowse.com/LUSTRE/files/original/74fc7eead2f61c385212a7bae93eff2a.txt</src>
        <authentication>d6d530c5d70a86ab26cc60e890ba0a43</authentication>
      </file>
      <file fileId="86">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e05a0d5300b408575310f0f4b2cd424b.csv</src>
        <authentication>8dd217dfaef24c4c9a41f8b2ee5a1738</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1024">
                <text>Training Transfer Between False-belief, Card Sorting and Counterfactual Reasoning in Children with ASD.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1025">
                <text>Amna Ahmed</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1026">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1027">
                <text>Previous training studies with typically developing (TD) children and children with Autism Spectrum Disorder (ASD) show that theory of mind and executive functions are two interrelated domains, and that training on one task can lead to improvement on the other. This training study aimed to examine the developmental relationship between three domains (Theory of Mind (ToM), Executive Functions (EF) and Counterfactual Reasoning (CR)) in children with ASD. A group of 30 children diagnosed with ASD were randomly allocated to one of three training groups; each group received training in one of the three domains. After training, the entire sample was tested to measure improvements. Results indicate that ToM training led to improvement on the EF and CR tasks, while EF training did not lead to ToM improvement and CR training did not lead to EF improvement. Findings are discussed and a novel cognitive model is proposed to account for the observed outcomes. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1028">
                <text>ASD, Training study&#13;
Domain general&#13;
Theory of Mind&#13;
Counterfactual reasoning&#13;
Executive Functions</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1029">
                <text>Following the design of Kloo and Perner (2003), children were first pretested. The pretest involved measures of verbal and nonverbal ability, two false-belief tasks followed by a card sorting task, and two counterfactual reasoning tasks. The pretest was scored to create a baseline for the participants' abilities in each of the areas assigned to the training groups. Children were then randomly assigned to one of three experimental training groups. Each group was given two sessions of training (approximately 1 week apart) in one of the three areas: false belief, counterfactual reasoning or DCCS. A posttest was given a week after the second training session; it was similar to the pretest in design, but different materials were used. The posttest measured any improvements in performance after training and examined any crossover effects between the different training groups. Finally, the children were given a follow-up test (approximately 6 weeks after the posttest) to investigate whether the effects of training were lasting. All of the sessions took place in a quiet room in the child's school.&#13;
&#13;
Procedure and Materials&#13;
Pretest and posttest. Both the sessions that preceded and followed the training involved tasks measuring performance in false belief, counterfactual reasoning and card sorting.&#13;
False-belief. One of two traditional unexpected transfer tasks was administered at pretest, based on Wimmer and Perner (1983) and modeled after Baron-Cohen et al.'s Sally-Anne task (1985). A scene was enacted for the child using wooden toy figures and a kitchen model, in which an item is unexpectedly transferred during the protagonist's absence. The stories were altered slightly to be more familiar to a Bahraini child by changing character names and making other alterations where appropriate; however, the main content of the stories remained very similar to the originals. After the story was told, the character returned to the scene and the child was asked a false-belief test question such as 'where do you think Ahmed will look for his teddy bear now?', followed by two control questions (memory and reality). One of the two stories was administered in the pretest and the other in the posttest. &#13;
The false-belief pretest and posttest also included an unexpected content task, another measure of false belief modeled on Wimmer and Perner (1983). In this task the child was presented with a closed familiar container (such as a Band-Aid box) and asked to guess its contents. The item in the box (a coin, for example) was then revealed to the child. Next, the item was placed back in the closed box and the child was asked 'what did you think was in the box before I opened it?' The correct answer is Band-Aids, but most children with ASD have difficulty suppressing the reality of what they know to be in the box, so the answer they give is 'a coin'. The child was then asked about another person's state of mind: 'what will (name another child) think is inside the box?' Finally, the child was asked a memory control question: 'what is really in the box?' &#13;
&#13;
Card Sorting. Following the false-belief task, the child was presented with a dimensional change card sorting task (DCCS; Frye et al., 1995). One set of cards (5cm x 10cm) was used, as well as two target cards (a blue house and an orange car) placed on two sorting boxes (12cm x 16cm). The card set had 12 testing cards (6 orange houses and 6 blue cars). The task involved two phases: in the pre-switch phase the participant was asked to sort the cards according to shape. After completing six trials successfully, the examiner explained to the child that the rules of the game would now change, and the child was asked to sort the cards according to colour rather than shape in the post-switch phase. &#13;
Counterfactual Reasoning. Lastly, the pretest and posttest sessions included two counterfactual thinking tasks based on Beck et al. (2011). One of the tasks in each session was enacted using wooden figures and materials such as a doll-sized bed, a cabin, teddy bears or pets. The second task was presented using a picture story consisting of three panels illustrating the events of the story. In these stories, both enacted and illustrated, a series of events leads to a specific end state. For example, the character picks flowers from the garden and places them in a vase on the table. The child is then asked 'if Zainab had not picked the flowers, where would they be?' Two control questions (memory and reality) followed. As with the false-belief task, some alterations were made to the stories where appropriate to accommodate the child's environment and imagination. The use of two different methods of delivery for the counterfactual task was introduced to create more variation in the understanding of counterfactual reasoning and to distinguish this task from the false-belief task. &#13;
Training&#13;
Following the pretest, the participants were assigned to three experimental groups, each receiving two training sessions in one of the three areas: false belief, counterfactual reasoning or DCCS. The aim of the training was to provide the children with explanations and feedback based on performance. &#13;
False-belief training group. In each of the training sessions, the false-belief group received two of four Ernie-says-something-wrong tasks (renamed Ali-says-something-wrong; Hale &amp; Tager-Flusberg, 2003), one unexpected transfer task different from the tasks administered during the pretest and posttest sessions, and finally one unexpected content task. &#13;
Ali-says-something-wrong. As in the original Kloo and Perner (2003), the task was presented with the aid of three puppets. In each of the stories Ali carried out an action towards one of the puppets but then stated that he had done it to another puppet. In each training session the child received two of the four original stories, followed by a question about the content of Ali's statement and about the conflicting reality. The other two stories were then administered in the following session.&#13;
Unexpected transfer. The training sessions also included one story about an item being unexpectedly transferred in the protagonist's absence, following Baron-Cohen et al. (1985). The stories were enacted using wooden dolls and dollhouse furniture. This training task aimed to teach children the main aspects of an unexpected transfer and to gradually guide them towards considering the character's false belief (Kloo and Perner, 2003). &#13;
Unexpected content. This task was presented using a different box and contents for each test and training session. Examples of the materials used are a Smarties tube, a Pringles box and a crayon box. The training in this task aimed to help the child understand his or her own false belief as well as others' states of mind.  &#13;
&#13;
DCCS training group. The card sorting group was given training in two DCCS tasks in each of the training sessions. Both tasks involved sorting according to colour and number, and the switch was always from colour to number. The two tasks administered were the three dimension switch and the transfer sorting task. &#13;
Three dimension switch. In this card sorting task, the participant was presented with two target cards (one yellow house and two green houses) placed on a sorting box. The test cards were similar to the target cards on one dimension; either colour or number (two yellow houses, one green house). The child had to sort by colour, then number, then by colour again and finally by number one last time. Two sets of cards were used, one for each training session. The experimenter helped the child identify each dimension after each switch was made and the rules of the game were covered again. Each switch involved six trials. &#13;
Transfer sorting task. Here, the target cards remained the same as in the previous task (one yellow house and two green houses), but a new test card that was only similar to the target cards on one dimension (two yellow cars) was introduced. The test cards were supposed to be sorted according to the dimension stated by the experimenter, starting with colour then switching to number.&#13;
&#13;
Counterfactual reasoning training group. Counterfactual reasoning tasks and false-belief tasks are interchangeable in some studies, where questions testing both skills follow a single story. In this study, however, the training groups had to receive different stories, followed by questions that tapped only counterfactual thinking, in order to distinguish it from false-belief training. The purpose of this divide in training was to ensure that each experimental group received training that did not overlap with the other groups', as the study ultimately aimed to measure the crossover effects. The CR group received two tasks in each training session. As in the pretest and posttest, one of the tasks was enacted using figures and the other was presented as a picture story. The stories were based on Beck et al. (2011) and Guajardo and Turley-Ames (2004).&#13;
Figure stories. Following Guajardo and Turley-Ames' (2004) counterfactual thinking tasks, the children were shown a story, presented using wooden dolls, in which an event occurs (usually as a consequence of an action taken by the protagonist), and the child was asked to generate alternative scenarios that would have prevented the occurrence of that event. For example, the character is drawing a picture using coloured pencils when a pencil breaks and, as a result, he cannot finish his drawing. The question following this story is 'what could the character have done so that he would have drawn the rest of the picture?', and the child is to give as many responses as he or she can generate. Other scenarios include avoiding breaking a glass, keeping clothes clean, taking a nap that leads to missing a favourite show, and someone eating the character's last chocolate bar. In the training sessions, the examiner walked the child through the logic of different actions leading to alternative endings. &#13;
Picture stories. The second task in the counterfactual training involved a single picture story based on Beck et al. (2011). The images were digitally drawn using Adobe Illustrator and the stories were shown as a sequence of three square panels. However, the question format following these stories differed from the task given using figures. In the picture stories task, the child is presented with a simple story of consequential events, followed by a question about where someone or something would have been if a certain event had not occurred. For example, one of the stories showed a cat napping on top of a car; the cat then spies a bird flying by and chases the bird all the way to the traffic light. The question associated with this story is 'if the cat had not spied the bird, where would the cat be?' Similar illustrations include a man receiving a call to meet a friend, a girl picking flowers, a drawing flying out of an open window and a man who gets sand on his shoes. The training aims to give the child some insight into how an occurrence could alter the course of events, resulting in certain outcomes; thus, if the occurrence had not taken place, we would be presented with a counterfactual state.    &#13;
&#13;
Follow-up test. The follow-up test was added to the experiment to measure whether children with ASD maintained any effects gained from training beyond the posttest. This test was therefore similar to the pretest and posttest in design; it included a false-belief task, a card sorting task and two counterfactual tasks. However, the materials and stories used were all different from those used previously in the tests and training. The follow-up test took place 6 weeks after the posttest session. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1030">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1031">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1032">
                <text>Ahmed2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1033">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1034">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1035">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1036">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1037">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1038">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1039">
                <text>Charlie Lewis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1040">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1041">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1042">
                <text>Participants were 30 children with ASD (2 girls, 28 boys; M age = 6.5 years, SD = 24 months). The children, recruited from special education schools in Bahrain, had received a diagnosis of ASD from a team of qualified educational psychologists based on either DSM-IV or CARS II and OWL.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1043">
                <text>ANOVA&#13;
Mixed effects analysis&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="35" public="1" featured="1">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1099">
                <text>Analogical transfer beyond the analog</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1100">
                <text>Radhika Kuppanda</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1101">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1102">
                <text>Analogical problem solving involves transferring the method used to solve the base analog onto the target analog, based on the structural similarity they share. Studies have found that experts have no difficulty in solving domain-specific analogical problems, while novice problem solvers fail to solve such problems because of their difficulty in retrieving the base analog. Failure to recollect the correct base analog forces the problem solver to solve the problem in an 'act first, think later' manner: they use a number of maximizing moves within the problem space to reach the goal state quickly. The use of such maximizing moves in solving analogical problems leads to an impasse, at which point alternative moves must be sought. The current study tries to overcome the problem of retrieval of the correct base analog by implementing an additional factor, termed an extra constraint, in solving analogical problems. This extra constraint inhibits the problem solver from choosing moves that maximize progress towards the goal state, which must essentially be avoided in analogical problem solving tasks. A secondary aim focuses on examining whether there is any difference between adolescent and adult problem solvers. Method: A total of 64 participants, aged 12-15 and 18-21 years, were administered three problems (2 analogical and 1 non-analogical). Results: The predictor variables (age and money) did not predict that participants from the older age category would perform better than the younger age group on any of the problems. Regarding the second aim, results showed that the older age group was able to solve more problems successfully than the younger age group.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1103">
                <text>analogical transfer&#13;
insight problem solving&#13;
extra constraints &#13;
developmental differences&#13;
maximization of progress</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1104">
                <text>The test materials consisted of paper-and-pencil tasks (see appended booklet). Each participant was provided with a booklet consisting of a set of 5 problems, comprising three experimental tasks and two filler tasks. The first problem was the analogical source problem (sheep dog problem), followed by a filler task (anagram solution). The second problem was the transfer problem (9 ball problem), followed by a second filler task (algebra solution). The last problem was the non-analog problem (cheap necklace problem). Space was provided under each of the problems to allow the participant to work out the solution. Solutions to each of the problems were also given to the participants. &#13;
&#13;
&#13;
Design and Procedure&#13;
&#13;
The study design comprised two between-subjects factors. The first factor was Age (12-15 vs. 18-21 years). The second factor was Resource (£8 vs. £12). The dependent variable was the number of correct solutions. The aim of the research was to assess whether the two predictor variables, age and money, would predict whether the participant solved the problem correctly or incorrectly. &#13;
&#13;
As per the BPS rules, confidentiality and anonymity of participants were strictly maintained. The study was conducted in a classroom setting, with 16 participants being administered the problems at a time. Each participant from each age group was first assigned to the low or high resource condition: 50% of the participants from the older and younger age groups received the low resource condition (£8) and the other 50% the high resource condition (£12). Participants received the booklet containing the 3 problems and 2 filler tasks. Each participant was given 5 minutes to attempt each problem. After five minutes, the solution to each problem was shown. The problems contained in the booklet were as follows:&#13;
•	Source problem (killer dog)&#13;
•	Filler task (anagrams)&#13;
•	Transfer problem (ball problem, £8 or £12 versions)&#13;
•	Filler task (algebra)&#13;
•	Non-analogical problem (cheap necklace).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1105">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1106">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1107">
                <text>Kuppanda2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1108">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1109">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1110">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1111">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1112">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1113">
                <text>Tom Ormerod</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1114">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1115">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1116">
                <text>The study was conducted on a total of 64 participants divided into two groups:&#13;
Adolescents (12-15 years): 32 participants recruited from schools.&#13;
Adults (18-21 years): 32 participants recruited from colleges.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1117">
                <text>logistic regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="42" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1259">
                <text>The Impact of Spatial Locations Involving Schema Representations on False Memories</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1260">
                <text>Ji Yun Gan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1261">
                <text>While numerous studies have investigated the effects of schema on false memories, few have looked at how schematic frameworks involving spatial locations influence levels of true and false memories in different age groups. For this study, two separate analyses were conducted; both required participants to study four environment scenes, which contained schema-consistent objects that were placed in either schema-expected or schema-unexpected locations, as well as schema-irrelevant objects. After each scene, a distractor task was presented, followed by the test scene. In the first analysis, false memory rates were examined by adding objects, which were not present during study, into test scenes; in the second analysis, false memory rates were assessed by shifting schema-consistent objects from a schema-expected to a schema-unexpected location, or vice versa, between study and test scene. In both analyses, target objects that remained in the same location in both study and test scenes were used to assess true memories. Three different age groups were studied: younger children aged seven and eight, older children aged nine and ten, and adults who were university students. Results revealed that, overall, adults were more schema-bound and had significantly higher levels of true memories as well as significantly lower levels of false memories compared to younger and older children. Furthermore, schema-inconsistent objects attracted lower levels of false memories across all age groups. However, objects that shifted from a schema-unexpected to a schema-expected location yielded high levels of false memories for object-location pairing. This study is of particular significance to the field of forensic psychology.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1262">
                <text>Schema, false memory, source monitoring, distinctiveness heuristic, object-location binding.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1264">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1266">
                <text>Rachel Coyle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1267">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1391">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1393">
                <text>The experiment was programmed using software called PsyScript and was run on a Mac laptop. Four different environments were used during the experiment: a kitchen, a living room, an office and a bathroom. For the practice run, a separate image of a seminar room was used. All the photographs were standardized across the four environments, with each photograph being 1300 x 864 pixels, to ensure that the quality and clarity of each photograph was the same. For every environment image, three different versions of the study scene were prepared, ensuring that each of the six schema-relevant target objects had the opportunity to appear in a schema-unexpected location, in a schema-expected location, or not at all. Moreover, for every version, two test scenes were prepared, to vary which of the target objects initially placed in schema-relevant or schema-irrelevant locations during the study phase would be shifted in the test scene. Figure 1a is an example of a bathroom scene during the study phase and Figure 1b is an example of the test scene for that version. The program was set to ensure that the sequence of the four environment images was pseudo-randomized for counterbalancing purposes, such that all the scenes were presented once, whereas the versions and test scenes selected were randomized. Moreover, the target objects that were circled during the test scenes were also pseudo-randomized, such that each object would only be circled once. For the practice run, both the study scene and the test scene were presented in laminated hardcopy form. Two separate slips of paper were prepared, one reading “Was this object anywhere in this picture before?” for participants allocated to the Presence condition, and the other “Was this object in this place before?” for participants allocated to the Location condition. 
The paper slips containing the questions were left on the table for participants to refer to.&#13;
&#13;
Figure 1a The above image depicts version 1 of the bathroom scene. The two target objects in schema-expected locations are the shampoo and toothpaste, whilst the two target objects in the schema-unexpected locations are the mirror and toilet brush, and the schema-irrelevant objects are the file, glove and toy.&#13;
&#13;
Figure 1b The above image depicts Test 1 of Version 1 of the bathroom scene. The mirror has now been shifted from a schema-unexpected location to a schema-expected location, whilst the toothpaste remains in the same position. The shampoo has now been shifted to a schema-unexpected location. The toilet paper and the weighing scale, which were not present during the study scene, are now present in schema-expected and schema-unexpected locations respectively, with the toilet paper being circled for the participant to respond to. The schema-irrelevant objects that were added were the jacket, pencil case and handbag. &#13;
&#13;
Design:&#13;
This study consists of two analyses, addressing the two research questions: first, whether location affects true and false memories; and second, what shifts in location do to memory for the original object (condition 1) and for the object-location pairing (condition 2). The first analysis investigates the true and false memories involving objects that were present and not present at study, whilst the second analysis investigates the true and false memories toward objects that were present at study and were later shifted in the test scene. Hence, a mixed ANOVA design was used to address the first of these questions. The within-subjects independent variables were study status (present, not present) and the schema appropriateness of the object location (schema-expected, schema-unexpected, irrelevant). The between-subjects factors were condition (Presence, Location) and age group (younger children, older children, adults). The “yes” responses for objects that were present in both scenes but not shifted, and for objects that were not present during the study scenes but were present in the test scene, were analyzed. &#13;
For objects that were shifted between the study and test scenes, the within-subjects factors were schema (schema-expected, schema-unexpected) and the shifting of objects (shift, no shift). The between-subjects factors were condition (Presence, Location) and age group (younger children, older children and adults). The dependent variable was the accuracy of the responses given, used to compare objects that shifted with objects that did not shift. &#13;
Procedure:&#13;
The experiment consisted of a study phase, a distractor task and a test phase, which took an estimated 10 minutes to complete and was conducted in an unoccupied classroom in Burnley Primary School, where participants were tested individually. Each participant was required to undergo a practice run before the actual experiment took place, to ensure that the participant had understood what he or she had to do. In the practice run, the laminated image of the seminar room was presented alongside the paper slip with either the Presence question or the Location question, depending on which condition the participant had been assigned to. The participants were given 12 seconds to study the image. After 12 seconds, the participant was presented with another image with several target objects circled, which would be pointed to one by one by the researcher. The participant would then be prompted to respond verbally as to whether they had seen that object anywhere before during the study scene, or whether that object had been in that location before during the study scene. For both conditions, the participants were instructed to press either the “Y” or “N” key on the keyboard in response to whether they had seen the circled object anywhere in the picture before during the study phase (Presence condition), or whether they had seen the circled object in that particular location before (Location condition). Once the participants acknowledged that they had understood, they were presented with the actual experiment.&#13;
Each participant was required to study four different environments, with one of three versions selected for every environment. Each study scene would last for 12 seconds, and then a distractor task would immediately appear. The distractor task, which lasted for 30 seconds, required the participant to hit any key on the keyboard whenever a specified animal (e.g. giraffe, frog, hippopotamus) appeared. A green tick would appear every time the participant successfully pressed a key before the specified animal disappeared. Once the 30 seconds were up, the distractor task would end, and one of the two test scenes for that environment would appear. A total of twelve objects would be circled sequentially, with the next object only being circled 0.5 s after the participant had given a response. Depending on the condition, each time an object was circled the participant would be required to respond to the question “Was this object anywhere in this picture before?” (Presence condition) or “Was this object in this place before?” (Location condition). If a participant in the Presence condition deemed that the object had been somewhere in the picture before, he or she would respond by pressing “Y” for Yes on the keyboard; if it was deemed not to have been in the picture before, the participant would press “N” for No. The same procedure was followed for the Location condition. Once the participant had responded to all 12 objects, a different environment scene would appear and the participant would repeat the process until all four scenes had been shown. </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1392">
                <text>A total of 155 participants, representing three different age groups, took part in this research study. The three age groups consisted of younger children aged seven and eight, older children aged nine and ten, and adults, who were university students. 40 older children took part in the Presence condition (mean age=9.52, SE=0.08; 16 males, 24 females) and 38 older children took part in the Location condition (mean age=9.47, SE=0.08; 10 males, 28 females). As for the adults, 18 university students took part in the Presence condition (mean age=19.67, SE=0.21; 4 males, 14 females) and 18 university students took part in the Location condition (mean age=19.94, SE=0.25; 4 males, 14 females). For the younger children group, there were a total of 22 participants in the Presence condition (10 males, 12 females; mean age=7.32, SE=0.10) and 19 participants in the Location condition (9 males, 10 females; mean age=7.32, SE=0.11). The participants for the younger children group were recruited from a school located in Burnley. As the participants were all below the age of consent, consent forms were given to the participants’ parents to indicate that they had allowed their child to participate in this study. This research was approved by the Psychology Department Ethics Committee, and adhered to both the British Psychological Society’s and the American Psychological Association’s guidelines.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="44" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1268">
                <text>Sketch Mental Reinstatement of Context: A Comparison of Autistic and Typically Able Children’s Drawings</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1269">
                <text>Mehar-Un-Nissa Masood</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1270">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1271">
                <text>The increasing number of children coming into contact with the criminal justice system is prompting further research into interviewing children. There is a lack of research in the area of children with developmental disorders such as autism (McCrory, Henry &amp; Happe, 2007). As sketching is one of the domains in which autistic children develop favourably in comparison to their age-matched peers, it could be utilised in order to gain the most information. Sketch MRC has been used with typically developing individuals and has been very beneficial for a variety of reasons: it gives structure to the narrative, lessens the cognitive demand on the interviewer, and also lessens the social demand of the interview. This study aims to see whether the content and style of the drawings of the typically developing and autistic groups are similar. Correlating data in the sketch with data from the interview recall would also give insight into how the act of drawing may be beneficial. A group of 30 children who were either typically developing or autistic were split into 3 groups depending on the results of the BPVS 3 and RPM. All children watched a film stimulus and were then asked to recall as much information as possible in a sketch MRC condition. The drawings were then analysed. Autistic children’s sketches, when compared with those of mental-ability-matched children, showed similarities in the number of salient items, number of items drawn, representational detail, detail in human figure drawings, numbers of correct, incorrect and confabulated items, and accuracy. A regression model indicated that the number of correct items recalled in the verbal transcript significantly predicted the number of correct items in the sketch. By demonstrating a significant relationship between the number of correct items sketched and recalled, it can be said that the act of drawing is useful in the sketch MRC condition. This indicates that the sketch MRC condition is just as useful for autistic individuals as it is for TD individuals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1272">
                <text>A between-subjects experimental design was employed with two independent variables: group, with two levels (autistic, typically developing), and mental ability, with three levels (low, intermediate, high). The dependent variable was the drawings produced during the interview, which were coded using a top-down coding scheme measuring the number of correct, incorrect and confabulated items of recall, as well as accuracy. Content measures included the representational detail of human figure drawings and whether the individual focused on people or the environment. Qualitative analysis attempts to uncover a range of issues, such as whether structure is used in the sketch, whether the sketches depict movement or a still image, the detail with which the items are drawn, and whether the sketch demands interaction. &#13;
&#13;
Materials&#13;
Film stimulus – Each child individually viewed a non-violent crime film exactly one minute in duration. The stimulus film was one which had been previously used in police training sessions. In keeping with ethical guidelines, the clip shown contained no abuse or violence. The film depicted a busy road with a roundabout; two people walk around the corner and into a shop. Moments later, the two individuals run out of the shop with another individual chasing after them. The clip then ends. &#13;
 	The British Picture Vocabulary Scale: Third Edition (BPVS3) was used both to act as a distractor task and to determine the child’s mental ability. The BPVS3 plays an important role in assessing a child’s receptive vocabulary, from 3 years up to 16 years of age. &#13;
Raven’s Progressive Matrices (RPM) was also used both to act as a distractor task and to determine the child’s mental ability. The RPM is a nonverbal group test suitable for ages ranging from 5 years to the elderly. It consists of 60 multiple-choice questions listed in order of difficulty. &#13;
An iPad with an approximately 8-inch screen was used to show children the film stimulus. The child was able to hold the iPad themselves to watch the film stimulus. &#13;
Procedure &#13;
Each child was individually taken from their class and shown the film stimulus by an assistant teacher. The researcher did not show the clip to the child, as the child was led to believe that the researcher had never seen the clip before. This was done to make sure the child recalled as much information as possible, and did not presume the researcher already knew it all. Once they had watched the entire film stimulus, the child was brought into a different room by the researcher. &#13;
The researcher then began to carry out the BPVS3. When this was completed the child was asked to work through the RPM and complete the 60 questions. This allowed the child and researcher to build a rapport and also acted as a distracter task from the film stimulus. &#13;
The researcher then explained to the child that for the next part of the experiment the child’s voice would be recorded. The child was asked for their permission, and if the child agreed the researcher explained that recording was about to begin. The child was then asked to recall as much information about the video clip as possible, and asked to draw what they remembered. Once they had begun drawing they were then asked about their drawing, with questions such as ‘what is it that you are drawing there?’ They were given as much time as required to complete the drawing. &#13;
Once the drawing was completed, the child was asked to tell the researcher about everything they remembered, and told they were free to use the drawing to help them in the explanation. After the child had told the researcher about everything they remembered in a free recall phase, the child was questioned on what they remembered. For example, if the child said there were two people, the researcher would try to gain some in-depth information about these people. The child was then thanked for taking part in the experiment and told that their parent or guardian would be given a gift voucher for them to spend. &#13;
Scoring&#13;
The drawings produced by autistic and typically developing children were coded alongside the transcripts from the interview to aid the understanding of the drawings. A similar approach was successfully adopted in Campbell, Sicovdal, Mupambireyi and Greyson (2010), as it minimised the analysts’ subjective interpretation of the drawings. However, the transcripts themselves were not analysed, as they form the dataset of another PhD project; the rationale for using them was solely to aid understanding of the drawings. &#13;
Each drawing was analysed using a three-step framework (see Fig.1) which started by analysing to what extent sketches represented the event that was witnessed. This was done to determine whether the sketch was successful in depicting the TBR event. The second step involved further analysing the items in the drawing, focusing on correctness. The final step examined representational detail and differences in what groups focussed upon, as well as qualitative analysis.  &#13;
The first step of analysis addressed the overarching aim of the study, giving an idea of how the sketches illustrated the film stimulus. A gross measure of the sketches was taken, which took into consideration the total number of attributes, to give an understanding of how detailed the sketches were. To determine whether the sketches successfully depicted what was shown in the film stimulus, the five most salient aspects of the TBR event were defined as follows: a road, cars, two individuals, a shop, and another individual (the victim). One mark was awarded for each aspect depicted in the sketch, giving a possible total completeness score of 5. &#13;
The following step in the analysis examined correctness scores. Every item drawn in the sketch was classified as correct, incorrect (e.g. sketching one person going into the shop instead of two) or a confabulation (sketching a detail that was not present in the film stimulus). Accuracy was calculated by dividing the total number of correct items sketched by the total number of items. The items were then divided into groups according to whether they illustrated people or the environment. Using the PhD project’s data, a correlation was carried out to see whether the total number of items and the total number of correct items depicted in the sketch correlated with the total number of items and the total number of correct items recalled in the transcript. This would help in understanding how useful the act of sketching is, rather than focusing on the sketches’ content. &#13;
As it was essential to capture representational detail, human figure drawings were classified by their complexity according to Cox and Parkins’ (1986) classification system of human figure drawings. In this stage, data were analysed qualitatively in order to gain a better understanding of the sketches. &#13;
&#13;
 &#13;
Figure 1. Concepts guiding analysis of drawings.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1273">
                <text>Masood2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1274">
                <text>Nicola Cook</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1275">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1276">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1277">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1390">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1278">
                <text>Dr Tom Ormerod</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1279">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1280">
                <text>Autism</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1281">
                <text>Participants&#13;
	Autistic group – Fifteen autistic children, aged between 5 and 16 years and of mixed genders, were recruited from special schools in England. Each had been given a formal diagnosis of autism by an appropriately qualified clinician according to current diagnostic criteria: DSM-IV (APA, 1994) and ICD-10 (WHO, 1993). &#13;
	Typically Developing (TD) group – Fifteen typically developing children, aged between 5 and 16 years and of mixed genders, were recruited from a state primary school in England. None of the children were known to have any symptoms associated with autism or Asperger's syndrome. &#13;
	To ensure the TD group and autistic group were comparable in terms of their drawing skill, both groups were matched according to their performances on Raven’s Coloured Progressive Matrices (RCPM) (Raven, Court &amp; Raven, 1983) and the British Picture Vocabulary Scale: Third Edition (BPVS 3) (Dunn, Dunn, Whetton &amp; Burley, 1997). Descriptive information about participants is given in Table 1. An independent t-test confirmed that the autistic and typically developing groups did not differ significantly on RCPM raw scores (t(28) = -0.61, p = 0.54). Submitting the BPVS 3 raw scores to an independent t-test likewise failed to reveal a significant effect of group (t(28) = 0.26, p = 0.78). Thus, the autistic and typically developing groups had overlapping ranges on both the RCPM and the BPVS 3.&#13;
	Each autistic child was matched with the typically developing child who had the closest scores on both the BPVS 3 and the RCPM. For example, an autistic child who scored 87 and 23 on the BPVS 3 and the RCPM respectively was matched with a typically developing child who scored 87 and 22 respectively. Participants were then assigned to one of three groups, depending on how they performed in the tests. Those who scored lowest were assigned to the low mental ability group, those who scored highest were assigned to the high mental ability group, and those who scored in the middle were assigned to the intermediate mental ability group. ANOVA confirmed a significant difference between the three groups in both the BPVS 3 (F(2, 27) = 33.90, p &lt; 0.01) and the RCPM (F(2, 27) = 6.59, p &lt; 0.05), thereby justifying splitting the groups in this manner.&#13;
All participants were naive to the experimental aims and hypotheses. Written consent was obtained from parents. Gift vouchers were given to parents as a reward on their child’s completion of the experiment.&#13;
&#13;
Table 1 Means, standard deviations (SDs), and ranges for Raven’s Coloured Progressive Matrices (RCPM) score, and the British Picture Vocabulary Scale (BPVS 3) score for the Autistic and Typically Developing (TD) groups &#13;
Group	N	Mean	Standard Deviation	Range&#13;
RCPM				&#13;
Autistic	15	22.00	7.55	7.00-34.00&#13;
Typically Developing	15	23.60	7.28	7.00-34.00&#13;
BPVS 3				&#13;
Autistic	15	118.73	22.95	87.00-159.00&#13;
Typically Developing	15	116.33	27.35	74.00-159.00&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1282">
<text>ANOVA, independent t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
