<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?collection=6&amp;output=omeka-xml&amp;page=2" accessDate="2026-05-01T19:14:42+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>2</pageNumber>
      <perPage>10</perPage>
      <totalResults>24</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="90" public="1" featured="0">
    <fileContainer>
      <file fileId="47">
        <src>https://www.johnntowse.com/LUSTRE/files/original/b1774444318c8b53bba03e2c298cbc26.pdf</src>
        <authentication>88f0277eccd48de43dbb1ad44ed9cb74</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2053">
<text>An investigation into automatic imitation: Comparing live and video setups, the effect of prior training and the influence on affective empathy</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2054">
<text>Evangelos Baltatzis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2055">
                <text>2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2056">
<text>If decreased Automatic Imitation (AI) improves empathetic abilities, then self-other&#13;
distinction processes are probably the mediating factor between imitation and&#13;
empathy; but if increased AI improves empathy, then imitation is probably at the core&#13;
of socio-cognitive functions. To date, it has been shown that decreased AI improves&#13;
visual perspective taking, corticospinal empathy and self-reported empathy. Moreover,&#13;
studies so far have focused on video AI stimuli. To understand whether AI has a more&#13;
direct relation to mimicry, I also developed a live paradigm. My research questions&#13;
were, first, what effect imitation training and inhibition training would have on AI;&#13;
second, whether live AI stimuli would have the same effects on AI testing (inhibition&#13;
versus imitation) and on arousal empathy testing; and third, whether the effects are&#13;
transferable to arousal empathy. As expected, there was a significant decrease in AI&#13;
in the video inhibition condition in comparison to the video imitation condition.&#13;
Unexpectedly, a significant but weak increase in arousal empathy was observed in&#13;
the video imitation condition and not in the video inhibition group. The differences in&#13;
AI and arousal empathy between the live imitation group and the live inhibition group&#13;
were not significant. The results give a new perspective on the topic of AI. If the&#13;
results can be reproduced by further studies, then imitation is probably more important&#13;
than self-other distinction processes, or perhaps arousal empathy differs from other&#13;
forms of empathy. Finally, the non-significant results in the live imitation versus live&#13;
inhibition training indicate that there may be confounding factors in live AI&#13;
research, or that video AI designs are more artificial than is assumed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2057">
<text>automatic imitation, empathy, imitation training, inhibition training, mirror neuron system, self-other distinction</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2058">
<text>Participants&#13;
Sixty (N = 60) participants were recruited in a two-by-two factorial design and&#13;
divided equally between two between-subject factors. The first factor was Stimulus:&#13;
AI was measured in response to hand actions performed by an experimenter&#13;
sat across a table from the participant (Live), or to the actor’s pre-recorded hand-action&#13;
stimuli presented on a monitor (Video). The second factor was Training: participants&#13;
undertook a brief period of either imitating the actions of the live or videoed&#13;
hand-action stimuli (IMI) or performing the opposite actions (IMI-IN).&#13;
The participants were recruited from students of Lancaster University.&#13;
Random selection could not be used because of practical and logistical difficulties;&#13;
hence, most of the participants were Masters students and some were PhD students.&#13;
The participants were either friends and acquaintances, or were motivated by the&#13;
chance to win a £10 Amazon voucher. In many cases they were motivated to&#13;
participate in my study because they also wanted me to participate in theirs.&#13;
First, we conducted the experiments with the video paradigm (15 participants in the&#13;
imitation training condition and 15 in the inhibition training condition),&#13;
and then we conducted the experiments with the live paradigm (15 participants in the&#13;
imitation training condition and 15 in the inhibition training condition).&#13;
We used random assignment to the training conditions: every&#13;
participant was randomly assigned to either the imitation training or the inhibition&#13;
training condition. That is, we did not first conduct 15 experiments in the&#13;
imitation training condition and then 15 in the inhibition training&#13;
condition; successive participants could be in different training conditions. Nevertheless,&#13;
one possible limitation is that we did not do the same for the stimulus factor,&#13;
as we conducted all the video-condition experiments before the&#13;
live-condition experiments.&#13;
Materials&#13;
The experiment was conducted on the researcher’s personal laptop. No&#13;
specific room was required, which allowed more flexibility in data&#13;
collection. MATLAB and the Cogent toolbox were used to write the&#13;
experimental script. To measure affective empathy, we used the Multifaceted&#13;
Empathy Test (MET). It consisted of 40 images, but it was split into two METs to&#13;
allow a pre-test measure. For imitation training and for inhibition training we&#13;
used three images of the researcher’s hand. In one image, the hand&#13;
was in the neutral position; in the second, the index finger was&#13;
lifted; and in the third, the middle finger was lifted.&#13;
Design and Procedure&#13;
First, we conducted the experiments of the video paradigm (30 participants)&#13;
and then those of the live paradigm. The experimental&#13;
procedure was divided into four phases. First, participants completed the MET&#13;
(Multifaceted Empathy Test). The first MET had 22 images. The MET tests affective&#13;
empathy: participants rate on a scale from 1 to 4 how strong their&#13;
affective arousal is when they see each image. The MET took approximately 5-10&#13;
minutes, depending on the participant.&#13;
After the first MET, participants did either imitation training or imitation&#13;
inhibition training. The default position in this task was to hold down&#13;
two buttons at all times with the right-hand index and middle finger. In the video&#13;
condition, participants pressed buttons A and Z with their right-hand index and middle&#13;
finger, and in the live paradigm they pressed the left and right arrow buttons with their&#13;
right-hand index and middle finger, respectively. In imitation training, they had to lift&#13;
their index finger when they saw a lifted index finger (video or live) and to lift their&#13;
middle finger when they saw a lifted middle finger (video or live). Both actions&#13;
were to be performed as quickly as possible. In inhibition training, the participants did&#13;
the opposite of the observed movements: when they saw a lifted index&#13;
finger, they lifted their middle finger as quickly as possible, and when they saw a lifted&#13;
middle finger, they lifted their index finger, again as quickly as possible. The&#13;
training phase consisted of two tasks and a short break; each task lasted&#13;
approximately six minutes.&#13;
After the training came the testing phase, in which we tested the effects of&#13;
training on Automatic Imitation. The testing phase consisted of two six-minute tasks&#13;
with a break between them. In the first task, the participants had to lift only&#13;
their index finger as quickly as possible, irrespective of which lifted finger they saw&#13;
(in either the video or the live condition). In the second testing task, they had to lift&#13;
their middle finger as quickly as possible, again irrespective of the lifted fingers they&#13;
saw.&#13;
Automatic imitation is measured as the difference in latency to lift the&#13;
pre-defined finger when the observed action is the same finger movement relative to&#13;
when the observed action is the opposite finger movement. For instance, when the&#13;
participant lifts their index finger, we measure the reaction time of that movement both&#13;
when they see a lifted index finger and when they see a lifted middle finger. Automatic&#13;
imitation is the difference between those two reaction times. This testing phase lasted&#13;
10 minutes, comprising 100 trials divided between two blocks. After lifting the finger,&#13;
the participants pressed the button again (the default position). Thus, reaction times&#13;
were measured as the time taken for the participant to lift their finger.&#13;
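The AI score just described is a simple reaction-time subtraction. A minimal sketch of that scoring, purely illustrative (the trial encoding and units are assumptions, not part of the original analysis), could look like:&#13;
&#13;
```python
def ai_score(trials):
    """Automatic-imitation effect as described above.

    trials: list of (rt_ms, congruent) pairs, where congruent is True when
            the observed finger movement matches the pre-defined response.
    Returns mean incongruent RT minus mean congruent RT, in ms.
    """
    congruent = [rt for rt, same in trials if same]
    incongruent = [rt for rt, same in trials if not same]
    mean = lambda xs: sum(xs) / len(xs)
    # A positive score indicates slower responding when the observed
    # movement conflicts with the required movement, i.e. stronger AI.
    return mean(incongruent) - mean(congruent)
```
&#13;
A positive difference reflects the interference produced by observing the opposite movement.&#13;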
To ensure that the training and the testing really targeted Automatic&#13;
Imitation, and to exclude spatial-compatibility confounds, the participants were&#13;
positioned perpendicular to the stimuli (in both the video and the live condition).&#13;
Unfortunately, we could not use the same perpendicular angle in both conditions,&#13;
but the difference in angle was small. In the video condition, the angle was&#13;
approximately 45 degrees (the participants’ fingers were on the A and Z buttons&#13;
and the stimuli were on the laptop screen), and in the live condition the stimuli were&#13;
approximately 90 degrees perpendicular (the participants’ fingers were on the right&#13;
and left arrow keys and the experimenter’s real stimulus hand was at the “tab” and “shift” keys).&#13;
In the final phase, the participants completed a second MET. It was exactly like&#13;
the first, only with different images. The order of the METs was counterbalanced&#13;
across participants: one participant did MET 1 first and MET 2 at the end, while&#13;
the next did MET 2 first and MET 1 at the end. The two METs were different&#13;
halves of the same MET test; we split the test arbitrarily in the middle so as also&#13;
to have a pre-test empathy baseline. I counterbalanced the order of the METs&#13;
across participants to rule out the possibility that some pictures in the&#13;
test are less difficult than others. Thus, if we find a large and statistically significant&#13;
difference in the final MET between the imitation and inhibition training groups,&#13;
we can say that, because the MET order was varied equally in both training conditions,&#13;
the observed change in empathy performance is not due to some images being&#13;
easier or more difficult than others.&#13;
In the IMI condition, the participants were required to lift their index finger&#13;
when they saw the stimulus hand (live or videoed) perform an index-finger action, or&#13;
lift their middle finger when they observed a middle-finger action; in the IMI-IN&#13;
condition they did the opposite: they lifted their index finger when they&#13;
observed a middle-finger action, or lifted their middle finger when they saw an&#13;
index-finger action.&#13;
In the second phase, the participants performed AI testing, during which they&#13;
were required to make a pre-defined finger-lifting movement (an index- or&#13;
middle-finger lifting action) as soon as the stimulus hand (live or videoed) moved,&#13;
regardless of whether the observed movement was an index- or middle-finger lifting&#13;
action. In the third phase, participants performed the Multi-Faceted Empathy Test,&#13;
during which they were presented with 30 images of individuals expressing emotions&#13;
and asked to judge which emotion was being expressed. The accuracy of their&#13;
responses was recorded. This final phase took 10 minutes.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2059">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2060">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2061">
                <text>Baltatzis2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2062">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2063">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2064">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2065">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2066">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2067">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2068">
<text>Dr. Daniel Shaw</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2069">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2070">
                <text>Social psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2071">
                <text>60 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2072">
                <text>ANOVA, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="72" public="1" featured="0">
    <fileContainer>
      <file fileId="26">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e3ec9a2d7e322ed9e9f3c933eeb1c0f7.pdf</src>
        <authentication>78a4d22dd75eb0dea0177eebcfb5a978</authentication>
      </file>
      <file fileId="64">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c6ac751946b14a68bfe4f2d19f12bd28.csv</src>
        <authentication>09531ca3ced3d74db868d28231d85358</authentication>
      </file>
      <file fileId="65">
        <src>https://www.johnntowse.com/LUSTRE/files/original/745d3a46a3075a757a76127c26b40b88.csv</src>
        <authentication>2a37d3b9b0dc8eee14572e5989f5e5b9</authentication>
      </file>
      <file fileId="66">
        <src>https://www.johnntowse.com/LUSTRE/files/original/eacb3785916f054ab50505311197fa3e.csv</src>
        <authentication>7ce9e588ce7f7c8b54751f5459edfc25</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1696">
<text>The effects of ambient temperature on aggressive cognitions</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1697">
                <text>Melissa Barclay</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1698">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1699">
<text>The world is getting warmer, and it is of interest to researchers to explore how changes in temperature experience affect human behaviour. The heat hypothesis suggests that an increase in heat is associated with an increase in antisocial behaviour (e.g. violence, aggression). However, social embodiment studies have also demonstrated hotter temperatures to be associated with less antisocial behaviour (e.g. greater gift giving). This study investigated whether higher ambient temperatures are associated with more or less antisocial responding, using a controlled laboratory approach. Participants were placed in either a cold room or a hot room whilst they completed two tasks that implicitly measured the accessibility of aggressive cognitions. Using a combination of linear mixed effects analyses and regression analyses, the results demonstrated no significant difference between the two temperature conditions in the accessibility of aggressive cognitions in a lexical decision go/no-go task or a word fragment completion task. Consequently, neither the heat hypothesis nor theories based upon a social embodiment framework were supported in this case. Possible alternative explanations and limitations of the study are discussed in light of the inconsistency between these results and those predicted by particular theoretical frameworks and illustrated in previous research. Directions for future research are suggested in light of the present findings.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1700">
                <text>Ambient, temperature, aggression&#13;
&#13;
Linear mixed effects modelling, regression, correlation&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1701">
                <text>Participants&#13;
	In total, 65 participants took part in this study. Unfortunately, the preregistered sample size of 120 participants could not be reached due to recruitment limitations. Participants were recruited via Lancaster University’s SONA system or via adverts, were friends of the researcher, or were recruited on an opportunistic basis around the Lancaster University campus. As a reward for participating, participants were entered into a prize draw to win one of 12 £10 Amazon vouchers.&#13;
	Participants were excluded if they met any of several a priori agreed-upon exclusion rules: (a) being a non-native English speaker, or (b) making a connection in the debrief section between the room temperature and the aggression measurements. Three participants were excluded from the analyses on this basis, leaving 62 participants’ data in the analyses. Demographic information was obtained using questions on the Qualtrics survey (Qualtrics, Provo, UT). The mean age of participants was 25.29 years (SD = 8.83; 43 female, 19 male). It was preregistered that participants must be between 18 and 55 years of age; however, given the prospect of increasing the sample size, the age range was extended to 18 to 60 years. Participants were randomly assigned to the cold condition (n = 31) or the hot condition (n = 31).&#13;
&#13;
Materials&#13;
	Lexical decision go/no-go task. A lexical decision go/no-go task was used to gauge the accessibility of aggressive cognitions. The standard lexical decision task (LDT) is an indirect measure of the semantic activation of specific constructs (e.g. aggression) and is an excellent method for assessing the activation of such semantic networks (Marsh &amp; Landau, 1995; see Parrott, Zeichner &amp; Evces, 2005). Advantageously, as the task does not require conscious expression, it is not easily affected by demand characteristics (see Greitemeyer &amp; Osswald, 2011). The LDT was used in conjunction with a go/no-go response, whereby participants are instructed to respond as quickly as possible to a word (as in the LDT) but to withhold any response if the presented stimulus is a nonword. The lexical decision go/no-go task has been demonstrated to be an excellent alternative to the standard LDT that measures performance in a similar manner (Perea, Rosa &amp; Gomez, 2002). Essentially, network activation is measured by the latency with which participants respond to particular stimulus words, with faster reaction times (RTs) indicating greater accessibility of the target construct (i.e. aggression) (Forster &amp; Davis, 1984; Johnson &amp; Hasher, 1987; Schacter, 1987; Morton, 1970). Specifically, faster RTs to aggressive words by participants in the hot condition, compared to the cold condition, would suggest that the construct of aggression is more accessible in hotter conditions. &#13;
	The lexical decision go/no-go task included the presentation of one hundred letter strings: 25 aggressive-related words (e.g., gun), 25 nonaggressive words (e.g., leaf) and 50 nonword letter strings (e.g., breaff). The aggressive-related words were taken from Anderson, Carnagey &amp; Eubanks (2003) and Johnson (2012). The nonaggressive items were extracted from Anderson et al. (2003) or chosen by the experimenter. Three independent raters who were blind to the study aims assessed the nonaggressive and aggressive words to determine whether they were appropriately classified as nonaggressive or aggressive, respectively. Fleiss’ kappa demonstrated perfect agreement between the three raters’ judgments, κ = 1, p &lt; .0001, indicating that the raters agreed that all items coded as aggressive or nonaggressive were appropriately coded as such. Nonword letter strings took the form of pseudowords to prevent participants from classifying the words by a simple surface analysis of substrings. To illustrate, a letter string containing “xx” can be quickly and easily recognised as a nonword without in-depth processing, because no valid English words contain “xx” (see Bösche, 2010). &#13;
	Furthermore, research has demonstrated that more frequent words (e.g. Perea et al., 2002) and shorter words are responded to more quickly (e.g. Spieler &amp; Balota, 2000). Given this, the word frequency of each real word (i.e., aggressive-related and nonaggressive words) was obtained from the SUBTLEX-UK database (Van Heuven, Mandera, Keuleers, &amp; Brysbaert, 2014), and the word-type categories were matched on word length. According to Welch's t-test, there was no significant difference between the aggressive-related words and the nonaggressive words in terms of word frequency, t(40) = 1.64, p = .12, or word length, t(48) = 0, p = 1. Together, this reduces the effect that word length and frequency might have on response latencies. &#13;
	In the lexical decision go/no-go task, participants were instructed to respond by pressing the ‘spacebar’ key on the keyboard when presented with a valid English word (i.e., a go response) but to withhold any response when presented with a nonword (i.e., a no-go response). The experimental trials consisted of 50 real-word and 50 nonword letter-string trials. The onset of each trial was marked by a plus sign (+), which acted as a fixation point. After 1000 ms, the fixation point was replaced by a letter string. The stimulus disappeared after 3000 ms and was followed automatically by the next fixation point and letter string in the same fashion. The presentation and randomization of letter strings, and the recording of response latencies, were controlled by JavaScript code running on Qualtrics. &#13;
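The trial structure described above can be sketched as follows. This is a hypothetical stdlib-only illustration of the design, not the study's Qualtrics JavaScript: each trial pairs a 1000 ms fixation with a 3000 ms letter string, in randomised order.

```python
# Go/no-go trial-list sketch: words require a "go" response,
# nonwords require withholding a response.
import random

def build_trials(words, nonwords):
    """Return a randomised trial list with the timings described in the text."""
    trials = [{"stimulus": w, "is_word": True} for w in words]
    trials += [{"stimulus": n, "is_word": False} for n in nonwords]
    random.shuffle(trials)  # randomised presentation order per participant
    for trial in trials:
        trial["fixation_ms"], trial["stimulus_ms"] = 1000, 3000
    return trials

trials = build_trials(["gun", "leaf"], ["breaff"])
```

In the actual task the lists would contain 50 real-word and 50 nonword items, matching the design above.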
&#13;
	Word Fragment Completion (WFC) Task. To measure the activation of aggressive thoughts, participants also completed a WFC task consisting of 50 word fragments (adapted from Anderson et al., 2003). Using Qualtrics, participants filled in the blanks with letters to form valid English words within a five-minute timeframe. Of the 50 fragments, 25 could be completed to form either a nonaggressive or an aggressive word (e.g., “ki__” could be completed as “kill” or “kite”); the other 25 could be completed only with nonaggressive words. Only the fragments with possible aggressive completions were used in the analyses; the remaining 25 served as decoys to ensure that participants would not guess that aggression was being measured. If a word could not be completed, participants were required to leave the answer box blank. This task is a valid measure of aggressive cognitions (Anderson et al., 2003). The outcome variable was calculated by dividing the number of fragments completed as aggressive words by the total number of fragments that could be completed as aggressive. Fragments were presented in a randomized order for each participant, controlled by Qualtrics. &#13;
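The outcome calculation described above amounts to a simple proportion; a minimal sketch with hypothetical names and data, not the authors' code:

```python
# Proportion of ambiguous fragments completed as aggressive words.
def aggressive_proportion(completions, aggressive_words):
    """completions: a participant's answers to the ambiguous fragments
    (empty string for fragments left blank).
    aggressive_words: the set of possible aggressive completions."""
    n_aggressive = sum(1 for word in completions if word in aggressive_words)
    return n_aggressive / len(completions)

# Two of four ambiguous fragments completed aggressively gives 0.5
score = aggressive_proportion(["kill", "kite", "", "gun"], {"kill", "gun"})
```

Here the denominator is the full set of ambiguous fragments; in the study that would be the 25 fragments with possible aggressive completions.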
&#13;
	Baseline Temperature Comfort. A measure of baseline temperature comfort was also included in the Qualtrics survey, capturing how cold or hot the participant generally feels. This was measured on a rating scale from -50 to +50, where higher scores indicate generally feeling hotter. Many factors, ranging from physical to cultural, can affect an individual’s thermal perception and comfort (Laskari et al., 2017; see, e.g., Djamila, 2017). For example, deviations in body temperature can have physiological roots, such as age (Castle, Norman, Yeh, Miller &amp; Yoshikawa, 1991). These factors vary across individuals, raising the possibility that individuals have baseline temperatures or comfort levels that differ systematically from the population average (Obermeyer, Samra, &amp; Mullainathan, 2017). In other words, the same temperature that is normal for one person might be cold for another. Given this, variation in an individual’s subjective baseline temperature comfort will be explored to see whether it moderates temperature effects on aggressive cognitions. &#13;
	&#13;
	Outside Temperature. A measure of outside temperature was not originally planned, and its inclusion was not preregistered. However, data from the local weather station were used to calculate the outside temperature during each testing session. Overall, the mean outside temperature was 18.6°C (SD = 2.91), ranging from 12.6°C to 22.9°C.&#13;
&#13;
Procedure and Design &#13;
 	Participants were welcomed into either the cold or the hot room depending on their random allocation. The room temperature reading taken before each testing session showed that temperatures across all sessions ranged from 15.5 to 16.9°C (M = 16.14, SD = 0.39) in the cold condition and from 27.8 to 29.8°C (M = 28.56, SD = 0.60) in the hot condition. The temperature-controlled room contained five workstations equipped with conventional PCs, allowing simultaneous data collection from up to five participants. Participants were separated from each other by partitions between the workstations. At a workstation, participants received the study information and gave their consent to participate. They then completed four decision-making tasks using the Qualtrics survey software: two measuring the accessibility of aggressive thoughts (i.e., the lexical decision go/no-go task and the WFC task) and two measuring cognitive ability (as part of another student’s MSc project). All task instructions were given via the computer. The four tasks were presented in a randomised order between participants by Qualtrics to reduce order effects (e.g., participants may be tired for tasks at the end) and carryover effects (e.g., earlier tasks may influence behaviour on subsequent tasks) (see Shaughnessy, Zechmeister &amp; Zechmeister, 2006). &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1702">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1703">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1704">
                <text>Barclay2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1705">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1706">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1707">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1708">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1709">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1710">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1711">
                <text>Dermot Lynott</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1712">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1713">
                <text>Cognitive Psychology&#13;
Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1714">
                <text>65 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1715">
                <text>Confirmatory Analysis&#13;
Exploratory Analysis&#13;
Regression Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="71" public="1" featured="0">
    <fileContainer>
      <file fileId="25">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2d240b7ef45b825fd4cfdb477cc8aa00.pdf</src>
        <authentication>9b4db285519912b22505ae113ad6ad1b</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1675">
                <text>Contrast polarity of a stimulus does not affect the cueing effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1676">
                <text>Eleni Sevastopoulou</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1677">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1678">
                <text>According to the contrast polarity effect, people’s attention is sensitive to dark objects within light backgrounds. According to the gaze-cueing effect, a shift of another person’s gaze attracts the observer’s attention in the direction of the darker region of the observed eyes; the gaze-cueing effect therefore depends on the contrast polarity of the eyes, with the gaze perceived as a darker spot within a lighter background. In the present study, combining the contrast polarity effect and the gaze-cueing effect, we examined whether the colour contrast between a black and a white square that suddenly flip position on a computer screen can produce an effect similar to gaze-cueing. The prediction was that participants would perceive the side to which the black square moved after the flip as an attentional cue, so that reaction times would be shorter when an object appeared on that side than when it appeared on the opposite side. The results showed that reaction times in the two conditions did not differ significantly. Thus, the contrast polarity of a stimulus does not affect the cueing effect. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1679">
                <text>Gaze cueing&#13;
Contrast polarity&#13;
Gaze perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1680">
                <text>The experiment used a single within-subjects design. The independent variable was cue congruency, with two conditions: the object appeared either congruently or incongruently with the attentional cue. The dependent variable was participants’ reaction time, measured in milliseconds (ms). &#13;
Procedure. Each participant was tested individually in a quiet room at the library of Lancaster University. Participants were tested on different days and at different times, including morning and evening hours. The only people present in the room during the experiment were the participant and the experimenter.&#13;
At the beginning, participants read the experiment instructions from the computer screen and were given clarifications by the researcher if needed. The experiment then started: two squares, one black and one white, sharing one side were presented on the screen for half a second. The shared side was located at the centre of the screen, so one square appeared on the left side of the screen and the other on the right. The squares then flipped and changed position; this apparent motion of the two squares was the cue. One second after flipping, the squares disappeared and a picture of an object randomly appeared either on the left or on the right side of the screen for one more second. Afterwards, the object disappeared and the screen remained blank. &#13;
The task of the participants was to press the appropriate keyboard button as quickly and as accurately as possible, depending on the side of the screen where the object appeared: the ‘Q’ button when the object appeared on the left side of the screen, or the ‘P’ button when it appeared on the right. They were given one second to respond to the object’s appearance. The sequence of trials was the same for every participant. Each of the six objects appeared a total of 30 times congruently with the cue and 30 times incongruently with it. Thus, the total number of trials per participant was 360: 180 in which the object appeared congruently with the cue and 180 in which it appeared incongruently. The experiment lasted 20 minutes per participant, and at the end of every session a message appeared on the screen informing the participant that the experiment was over.&#13;
The prediction was that the side to which the black square moved after the flip would be perceived by participants as an attentional cue. Their gaze would be drawn to the cue and an effect similar to the gaze-cueing effect would appear: reaction times would be shorter on trials where the object appeared on the same side as the cue than on trials where it appeared on the opposite side. The independent variable was cue congruency, which included two conditions: congruent trials (the object appeared on the same side as the cue) and incongruent trials (the object appeared on the opposite side of the cue). &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1681">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1682">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1683">
                <text>Sevastopoulou2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1684">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1685">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1686">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1687">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1688">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1689">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1690">
                <text>Dr. Eugenio Parise</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1691">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1692">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1693">
                <text>25 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1694">
                <text>t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="63" public="1" featured="0">
    <fileContainer>
      <file fileId="53">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6706f99fb62f6749b7c0d33bae37059f.pdf</src>
        <authentication>38f45aae780ada036b447d77607c2a80</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1552">
                <text>Investigating the effects of dimensionality and referent variability on word learning in autism and typical development.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1553">
                <text>Fiona Smith&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1554">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1555">
                <text>Dimensionality, referent variability, word learning.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2142">
                <text>The ability to learn words from pictures could give children another avenue to develop&#13;
their lexical understanding and vocabulary. This is particularly important for children&#13;
with developmental disorders such as autism. This research investigated how word&#13;
learning processes (referent selection, retention and generalisation) in autism and&#13;
typical development are influenced by learning from pictures and objects, including&#13;
single and multiple exemplars of symbols. The participants were 16 typically&#13;
developing children (M age = 3.68; 7 males, 43.75%, and 8 females, 56.25%)&#13;
and 16 children diagnosed with ASD (M age = 9.37; 8 males and 8 females).&#13;
Participants looked at pictorial and&#13;
object referents. This was to differentiate whether there was a preference in word&#13;
acquisition and retention, depending on the structure of the stimuli. It was expected&#13;
that word referent selection, retention and generalisation would be more accurate in&#13;
the object condition compared to the picture condition, as participants would not be&#13;
relying on picture-word associations. Participants also examined words paired with&#13;
either single or multiple exemplars of referents, to determine whether multiple &#13;
exemplars of shape-matched referents would promote shape-based generalisation&#13;
in the ASD group, which has been shown to be impaired (Hartley and Allen, 2014).&#13;
It was expected that retention would be superior when learning directly from objects&#13;
in both the ASD and TD groups, which was found in this research. We also&#13;
anticipated that labelling from multiple exemplars, rather than single exemplars,&#13;
may scaffold more consistent shape-based generalisation. We found that referent&#13;
selection was more accurate in both groups in the multiple exemplar condition&#13;
compared to the single exemplar condition. The implications of this research are&#13;
that we can further our understanding of how symbols or objects benefit word&#13;
learning, retention and generalisation in ASD and TD children, and whether there&#13;
are cognitive differences between the ASD and TD groups in word learning&#13;
processes. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2143">
                <text>Participants&#13;
The participants in this study were 16 minimally verbal children with ASD (M age =&#13;
10.42 years, SD = 3.29) and 16 typically developing children (M age = 3.64, SD =&#13;
1.64).&#13;
Children with ASD were recruited from the specialist schools Dee Banks School in&#13;
Chester, and Hinderton School in Ellesmere Port. Typically developing children were&#13;
recruited via opportunity sampling, through advertisements on the social media&#13;
platform Facebook.&#13;
All the children with ASD received their diagnosis from a qualified clinical or&#13;
educational psychologist. This was obtained using standardised instruments (i.e.,&#13;
the Autism Diagnostic Observation Scale and Autism Diagnostic Interview—Revised;&#13;
Lord, Rutter &amp; Le Couteur, 1994; Lord, Rutter, DiLavore &amp; Risi, 2002) and expert&#13;
judgment. Clinical diagnosis was confirmed for children with autism using the&#13;
Childhood Autism Rating Scale (CARS; Schopler, Van Bourgondien, Wellman &amp;&#13;
Love, 2010), which was completed by a class teacher (Raw Score M score = 37.26,&#13;
Raw Score range = 27 – 53.5). The ASD were tested for non-verbal vocabulary using&#13;
the British Picture Vocabulary Scale (BPVS; Dunn, Dunn, Whetton, &amp; Burley, 1997),&#13;
which was conducted by the experimenter. Mean receptive vocabulary of children&#13;
with autism was years 2.84 (range = 6 years – 2 years 4 months).&#13;
Some of the children diagnosed with ASD who participated in this study were current&#13;
PECS-users with impaired expressive language skills. Most of the children with ASD&#13;
who participated in this study were functionally non-verbal (no spoken words),&#13;
although some produced speech of 1–2 words in length (much of which was&#13;
echolalia), and one child could speak short phrases over three words in length.&#13;
Therefore, the sample was linguistically representative of children with ASD who&#13;
receive and may benefit from picture-based communication interventions. Participants&#13;
had 1–6 years’ experience of using PECS.&#13;
When recruiting the children diagnosed with ASD, the experimenter emailed&#13;
specialist schools, explaining the study and asking whether the school would be&#13;
interested in participating. When recruiting the TD children, advertisements were placed on social &#13;
media platforms such as Facebook (see Appendix A). The information poster&#13;
instructed the parents to contact the experimenter via email if they were interested in&#13;
their child participating.&#13;
The study was approved by the Lancaster University Ethics Committee and informed&#13;
consent was obtained from parents before children were included in the study.&#13;
See Appendix B for the completed and approved Lancaster University Ethics&#13;
Committee form.&#13;
Materials&#13;
For the warm-up test trials in all tests, the participants were shown three familiar&#13;
objects (for example, dog, bus, chair); these were small laminated pictorial symbols.&#13;
In the picture, single and multiple exemplar conditions the participants were shown 12&#13;
laminated pictorial symbols, 6 familiar and 4 novel. The participants saw each familiar&#13;
symbol once and each novel symbol twice. Participants saw the same named novel&#13;
symbols in the retention test trial, in which the named novel objects were shown to&#13;
each participant twice. In the generalisation test trial, the participants saw shape&#13;
matches (the same object or picture, for example both would be paperclips) to the&#13;
named novel objects from the referent selection and retention test trials, but in&#13;
different colour variations (for example a red and a blue paperclip). In the object&#13;
condition participants followed the same test layout and number of referents as the&#13;
other conditions, the difference being that the stimuli were actual objects rather than&#13;
pictorial symbols. The words for the familiar stimuli were gathered using the CDI&#13;
database (Fenson, Dale, Reznick, Bates, Thal, Pethick &amp; Stiles, 1994) and&#13;
appropriately matched to the non-verbal age range of the ASD children and the&#13;
chronological age of the TD children (see Appendix C). The words for the novel&#13;
stimuli were picked from the NOUN database (Horst &amp; Hout, 2016); all were&#13;
two syllables long, with different phonological sounds within each set. The novel&#13;
words in the picture condition were Gloop, Virdex, Akar and Teebu. The novel&#13;
words in the object condition were Fiffin, Tranzer, Brisp and Pentants. For the&#13;
single exemplar condition the novel words were Tulver, Kaki, Jefa and Blicket. For&#13;
the multiple exemplar condition the novel words were Zepper, Toma, Modi and&#13;
Chatten (see Appendix D).&#13;
Objects were obtained through the equipment assistant at Lancaster University and&#13;
purchased through Amazon. Appendix E is an example warm-up selection trial seen&#13;
by a participant, together with the response form completed by the experimenter.&#13;
Appendix F is an example referent selection trial, Appendix G an example retention&#13;
test trial, and Appendix H an example generalisation test trial, each with the&#13;
response form completed by the experimenter. All test trials were pseudorandomised&#13;
per participant, per condition and per trial. Although all participants saw the same&#13;
number of familiar and novel objects or pictures, and each picture or object had the&#13;
same name per shape-matched object, the items appeared in a different order.&#13;
Therefore, a different response form was required per participant to record the&#13;
change in referent location and set order. &#13;
Procedure&#13;
Prior to the children participating, the parents received the information sheet (see&#13;
Appendix I) and the consent form (see Appendix J). On the final day of testing the&#13;
experimenter brought the debrief forms (see Appendix K).&#13;
Participants were tested individually, in their schools for the children with ASD or in&#13;
their own homes for the TD children, and were always accompanied by a familiar&#13;
adult, teaching assistant or parent. The participants were seated at a table opposite the&#13;
experimenter; the materials were placed within reaching distance of the participants.&#13;
Children were reinforced throughout the session; correct performance was only&#13;
reinforced during the warm-up trial. The first test examined the picture condition vs&#13;
the object condition; the second test examined single vs multiple exemplars. The tasks&#13;
were between-participants, comparing the results of the TD group with those of the&#13;
ASD group; however, some within-participants analyses were carried out to&#13;
determine accuracy between test conditions (e.g. picture vs object). Each task always&#13;
consisted of a warm-up stage, referent selection trial, distractor familiarisation trial,&#13;
retention test trial and generalisation test trial. The test trials were based on those of&#13;
Horst and Samuelson (2008), with the addition of a generalisation trial, which was&#13;
not included in their study.&#13;
Picture Condition vs Object Condition Tests&#13;
Warm Up Stage&#13;
In the object condition, participants were shown three sets of three familiar objects;&#13;
in the picture condition, three sets of three familiar pictures. Participants were asked&#13;
to identify each in turn. The warm-up objects or pictures were pseudorandomised,&#13;
changing the order and location per participant per condition. The pictures or objects&#13;
were removed and reordered after each set, and the participant’s response recorded.&#13;
Referent Selection Trial&#13;
Participants were shown four sets of stimuli (pictures for the picture condition and&#13;
objects for the object condition); the sets of stimuli were different per condition, each&#13;
consisting of two familiar items and one novel item. Each set was shown four times:&#13;
the novel referent was requested twice and the two familiar referents once. The order&#13;
and location of the sets were pseudorandomised for each participant, the location of&#13;
the novel object was never in the same location twice consecutively, and a novel or&#13;
familiar object or picture was never requested more than twice consecutively. Sets&#13;
were not presented twice in a row.&#13;
Distractor Familiarisation&#13;
To control for novelty or familiarity preferences in the subsequent test trials, children&#13;
were shown all the novel objects that were used in the generalisation test trials. Each&#13;
new novel object was a different colour variation of a previously seen novel object&#13;
that had been named in the referent selection trial. Each new novel object or picture&#13;
was shown alongside a previously named novel object or picture that was not a shape&#13;
or colour match to it. The objects or pictures were placed in front of the participant,&#13;
who was not asked to identify them, just to “look”.&#13;
Retention Test Trial&#13;
Retention trials assessed children’s memory of the newly learned word–referent&#13;
pairings. Participants were shown four sets; each set was shown twice, with the target&#13;
object requested twice. The sets were made up of three named novel objects; names&#13;
were picked from the NOUN database (Horst &amp; Hout, 2016), each two syllables&#13;
long. Objects or pictures were picked on the basis that they would be novel to&#13;
participants, for instance gym or plumbing equipment. The objects and pictures were&#13;
not shape or colour matches to each other and had been shown in the referent&#13;
selection test trial. The order and location of each object or picture per set were&#13;
pseudorandomised per participant per trial. The location of the novel object was&#13;
never in the same location twice consecutively, and a novel or familiar object or&#13;
picture was never requested more than twice consecutively. Sets were not presented&#13;
twice in a row.&#13;
Generalisation Test Trial&#13;
Generalisation trials assessed children’s extension of labels to new items.&#13;
Participants were shown four sets, each consisting of three objects or pictures; each&#13;
set was shown twice, with the target object requested twice. The objects or&#13;
pictures in the sets were shape matches to the objects or pictures shown in the referent&#13;
selection and retention trials, but in different colour variations. All the shape-matched &#13;
objects or pictures were also colour matched to a non-shape-matched object from the&#13;
previous conditions. The order and location of each object or picture per set were&#13;
pseudorandomised per participant per trial. The location of the novel object was never&#13;
in the same location twice consecutively, and a novel or familiar object or picture was&#13;
never requested more than twice consecutively. Sets were not presented twice in a&#13;
row.&#13;
Single vs Multiple Exemplars Tests&#13;
Warm Up Trial&#13;
Participants were shown three sets of three familiar pictures in both the single and&#13;
multiple exemplar conditions. Participants were asked to identify each in turn; the&#13;
pictures were pseudorandomised, changing the order and location per participant per&#13;
condition. The pictures were removed and reordered after each set, and the&#13;
participant’s response recorded.&#13;
Referent Selection Trial&#13;
Participants were shown four sets of stimuli; the sets of stimuli were different per&#13;
condition, each consisting of two familiar items and one novel item. Each set was&#13;
shown four times: the novel referent was requested twice and the two familiar&#13;
referents once. In the multiple exemplar trial, two differently coloured versions of&#13;
each unfamiliar object were named (one per novel trial for each set). The order and&#13;
location of each object or picture per set were pseudorandomised per participant per&#13;
trial. The location of the novel object was never in the same location twice&#13;
consecutively, and a novel or familiar object or picture was never requested more&#13;
than twice consecutively. Sets were not presented twice in a row.&#13;
Distractor Familiarisation&#13;
To control for novelty or familiarity preferences in the subsequent test trials, children&#13;
were shown all the novel pictures that were used in the generalisation test trials. Each&#13;
new novel picture was a different colour variation of a previously seen novel picture&#13;
referent that had been named in the referent selection trial. Each new novel picture&#13;
was shown alongside a previously named novel picture that was not a shape or colour&#13;
match to it. The pictures were placed in front of the participant, who was not asked&#13;
to identify them, just to “look”.&#13;
Retention Test Trial&#13;
Retention trials assessed children’s memory of the newly learned word–referent&#13;
pairings. Participants were shown four sets; each set was shown twice, with the target&#13;
referent requested twice. The sets were made up of three named novel objects; names&#13;
were picked from the NOUN database (Horst &amp; Hout, 2016), each two syllables&#13;
long. Pictures were picked on the basis that they would be novel to participants, for&#13;
instance gym or plumbing equipment. The pictures were not shape or &#13;
colour matches to each other and had been shown in the referent selection test trial.&#13;
The order and location of each picture per set were pseudorandomised per participant&#13;
per trial. The location of the novel object was never in the same location twice&#13;
consecutively, and a novel or familiar object or picture was never requested more than&#13;
twice consecutively. Sets were not presented twice in a row.&#13;
Generalisation Test Trial&#13;
Generalisation trials assessed children’s extension of labels to new items.&#13;
Participants were shown four sets, each consisting of three pictures; each set was&#13;
shown twice, with the target object requested twice. The pictures in each set were&#13;
shape matches to the pictures shown in the referent selection and retention trials, but&#13;
in different colour variations. All the shape-matched pictures were also colour matched&#13;
to a non-shape-matched object from the previous conditions. The order and location&#13;
of each picture per set were pseudorandomised per participant per trial. The location of&#13;
the novel object was never in the same location twice consecutively, and a novel or&#13;
familiar picture was never requested more than twice consecutively. Sets were not&#13;
presented twice in a row. In the multiple exemplar condition the generalisation test&#13;
trial introduced the shape-matched referent in a third colour that was colour&#13;
matched to a referent of a different shape seen in the referent selection or&#13;
retention test trial. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2144">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2145">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2146">
                <text>Smith2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2147">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2148">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2149">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2150">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2151">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2152">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1556">
                <text>Calum Hartley&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1557">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2153">
                <text>Cognitive, Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2154">
                <text>16 minimally verbal children with ASD and 16 typically developing children </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2155">
                <text>ANOVA, Correlation, quantitative, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="44" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1268">
                <text>Sketch Mental Reinstatement of Context: A Comparison of Autistic and Typically Able Children’s Drawings</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1269">
                <text>Mehar-Un-Nissa Masood</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1270">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1271">
                <text>The increasing number of children coming into contact with the criminal justice system is prompting further research into interviewing children. There is a lack of research on children with developmental disorders such as autism (McCrory, Henry &amp; Happé, 2007). As sketching is one of the domains in which these children develop favourably in comparison to their age-matched peers, it could be utilised in order to gain the most information. Sketch MRC has been used with typically developing individuals and has been very beneficial for a variety of reasons: it gives structure to the narrative, lessens the cognitive demand on the interviewer and also lessens the social demand of the interview. This study aims to see whether the content and style of the drawings of the typically developing and autistic groups are similar. Correlating data in the sketch with data from the interview recall would also give insight into how the act of drawing may be beneficial. A group of 30 children who were either typically developing or autistic were split into 3 groups depending on the results of the BPVS3 and RPM. All children watched a film stimulus and were then asked to recall as much information as possible in a sketch MRC condition. The drawings were then analysed. Autistic children’s sketches, when compared with those of mental-ability-matched children, showed similarities in number of salient items, number of items drawn, representational detail, detail in human figure drawings, number of correct, incorrect and confabulated items, as well as accuracy. A regression model indicated that the correct number of items recalled in the verbal transcript significantly predicted the correct number of items in the sketch. By presenting a significant relationship between the number of correct items sketched and recalled, it can be said that the act of drawing is useful in the sketch MRC condition. This indicates that the sketch MRC condition is just as useful for autistic individuals as it is for TD individuals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1272">
                <text>A between-subjects experimental design was employed with two independent variables: group, with two levels ((i) autistic, (ii) typically developing), and mental ability (low, intermediate and high). The dependent variable was the drawings produced during the interview, which were coded using a top-down coding scheme measuring the number of correct, incorrect and confabulated items of recall, and accuracy. Content included the representational detail of human figure drawings and whether the individual focuses on people or the environment. Qualitative analysis attempts to uncover a range of issues, such as whether structure is used in the sketch, whether the sketches depict movement or a still image, the detail with which the items are drawn, and whether the sketch demands interaction.  &#13;
&#13;
Materials&#13;
Film stimulus – Each child individually viewed a non-violent crime film exactly one minute in duration. The stimulus film was one which had been previously used in police training sessions. Keeping in mind ethical guidelines the clip shown had no abuse or violence. The film depicted a busy road with a roundabout, two people walk from around the corner and into a shop. Moments later the two individuals run out of the shop with another individual chasing after them. The clip then ends. &#13;
The British Picture Vocabulary Scale: Third Edition (BPVS3) was used to act as a distracter task and also to determine the child’s mental ability. The BPVS3 plays an important role in assessing a child’s receptive vocabulary, from 3 years up to 16 years of age. &#13;
Raven’s Progressive Matrices (RPM) was also used, not only to act as a distracter task but also to determine the child’s mental ability. The RPM is a nonverbal group test suitable for ages ranging from 5 years to the elderly. It consists of 60 multiple-choice questions listed in order of difficulty. &#13;
An iPad with an approximately 8-inch screen was used to show children the film stimulus. The child was able to hold the iPad themselves to watch the film stimulus. &#13;
Procedure &#13;
Each child was individually taken from their class and shown the film stimulus by an assistant teacher. The researcher did not show the clip to the child, as the child was led to believe that the researcher had never seen the clip before. This was done to ensure the child recalled as much information as possible and did not presume the researcher already knew it all. Once they had watched the entire film stimulus, the child was brought into a different room by the researcher. &#13;
The researcher then began to carry out the BPVS3. When this was completed the child was asked to work through the RPM and complete the 60 questions. This allowed the child and researcher to build a rapport and also acted as a distracter task from the film stimulus. &#13;
The researcher then explained to the child that for the next part of the experiment the child’s voice would be recorded. The child was asked for their permission and, if the child agreed, the researcher explained that recording was about to begin. The child was then asked to recall as much information about the video clip as possible and to draw what they remembered. Once they had begun drawing, they were asked about their drawing with questions such as ‘what is it that you are drawing there?’ They were given as much time as required to complete the drawing. &#13;
Once the drawing was completed, the child was asked to tell the researcher about everything they remembered, and told they were free to use the drawing to help them in the explanation. After the child had told the researcher everything they remembered in a free recall phase, the child was questioned on what they remembered. For example, if the child said there were two people, the researcher would try to gain some in-depth information about these people. The child was then thanked for taking part in the experiment and told that their parent or guardian would be given a gift voucher for them to spend. &#13;
Scoring&#13;
The drawings produced by autistic and typically developing children were coded alongside the transcripts from the interview to aid the understanding of the drawings. A similar approach was successfully adopted in Campbell, Sicovdal, Mupambireyi and Greyson (2010), as it minimised the analysts’ subjective interpretation of the drawings. However, the transcripts themselves were not analysed as they form the dataset of another PhD project. The rationale for using the transcripts is to aid understanding of the drawings. &#13;
Each drawing was analysed using a three-step framework (see Fig.1) which started by analysing to what extent sketches represented the event that was witnessed. This was done to determine whether the sketch was successful in depicting the TBR event. The second step involved further analysing the items in the drawing, focusing on correctness. The final step examined representational detail and differences in what groups focussed upon, as well as qualitative analysis.  &#13;
The first step of analysis shed light on the overarching aim of the study and to gain an idea about how the sketches depicted an illustration of the film stimulus. A gross measure of the sketches was taken, which took into consideration the total number of attributes, to give an understanding of how detailed these sketches were. To determine whether the sketches successfully depicted what was shown in the film stimulus, the five most salient aspects of the TBR event were defined as follows: a road, cars, two individuals, shop, and another individual (the victim). One mark was awarded for each aspect depicted in the sketch, giving a possible total completeness score of 5. &#13;
The next step of analysis examined correctness scores. Every item drawn in the sketch was classified as correct, incorrect (e.g. sketching one person going into the shop instead of two), or a confabulation (sketching a detail that was not present in the film stimulus). Accuracy was calculated by dividing the total number of correct items sketched by the total number of items. The items were then grouped according to whether they illustrated people or the environment. Using the PhD project’s data, a correlation was carried out to see whether the total number of items and the total number of correct items depicted in the sketch correlated with the total number of items and the total number of correct items recalled in the transcript. This would help establish how useful the act of sketching itself was, rather than focusing on the sketches’ content. &#13;
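The completeness and accuracy scoring described above can be sketched as follows; this is a minimal illustration, and the aspect names, item labels, and example data are invented placeholders, not the study's actual coding scheme or data.

```python
# Illustrative sketch of the two scoring steps described above.
# Aspect names, labels, and example data are hypothetical.

SALIENT_ASPECTS = {"road", "cars", "two individuals", "shop", "victim"}

def completeness_score(depicted_aspects):
    """One mark per salient aspect depicted, out of a possible 5."""
    return len(set(depicted_aspects).intersection(SALIENT_ASPECTS))

def accuracy_score(item_labels):
    """Correct items divided by total items; labels are
    'correct', 'incorrect', or 'confabulation'."""
    if not item_labels:
        return 0.0
    correct = sum(1 for label in item_labels if label == "correct")
    return correct / len(item_labels)

# Example: a sketch depicting four of the five salient aspects,
# with 6 of 8 drawn items judged correct.
print(completeness_score({"road", "cars", "shop", "victim"}))       # 4
print(accuracy_score(["correct"] * 6 + ["incorrect", "confabulation"]))  # 0.75
```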
As it was essential to capture representational detail, human figure drawings were rated for complexity according to Cox and Parkins’ (1986) classification system of human figure drawings. In this stage, data were analysed qualitatively in order to gain a better understanding of the sketches. &#13;
&#13;
 &#13;
Figure 1. Concepts guiding analysis of drawings.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1273">
                <text>Masood2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1274">
                <text>Nicola Cook</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1275">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1276">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1277">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1390">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1278">
                <text>Dr Tom Ormerod</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1279">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1280">
                <text>Autism</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1281">
                <text>Participants&#13;
	Autistic group – Fifteen autistic children, aged 5-16 years and of mixed genders, were recruited from special schools in England. They had been given a formal diagnosis of autism by an appropriately qualified clinician according to current diagnostic criteria: DSM-IV (APA, 1994) and ICD-10 (WHO, 1993). &#13;
	Typically Developing (TD) group – Fifteen typically developing children, aged 5-16 years and of mixed genders, were recruited from a state primary school in England. None of the children were known to have any symptoms associated with autism or Asperger’s syndrome. &#13;
	To ensure the TD and autistic groups were comparable in terms of their drawing skill, both groups were matched according to their performance on Raven’s Coloured Progressive Matrices (RCPM) (Raven, Court &amp; Raven, 1983) and the British Picture Vocabulary Scale: Third Edition (BPVS 3) (Dunn, Dunn, Whetton &amp; Burley, 1997). Descriptive information about participants is given in Table 1. An independent t-test confirmed that the autistic and typically developing groups did not differ significantly on RCPM raw scores, t(28) = -0.61, p = 0.54. Submitting the BPVS 3 raw scores to an independent t-test also failed to reveal a significant effect of group, t(28) = 0.26, p = 0.78. Thus, the autistic and typically developing groups had overlapping ranges on both the RCPM and the BPVS 3.&#13;
	Each autistic child was matched with the typically developing child who had the closest score on both the BPVS 3 and the RCPM. For example, an autistic child who scored 87 and 23 on the BPVS 3 and the RCPM respectively was matched with a TD child who scored 87 and 22 respectively. Participants were then assigned to one of three groups, depending on how they performed in the tests. Those who scored lowest were assigned to the low mental ability group, those who scored highest were assigned to the high mental ability group, and those who scored in the middle were assigned to the intermediate mental ability group. ANOVA confirmed a significant difference between the three groups on both the BPVS 3 (F(2, 27) = 33.90, p &lt; 0.01) and the RCPM (F(2, 27) = 6.59, p &lt; 0.05), thereby justifying splitting the groups in this manner.&#13;
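The closest-score pairing described above can be sketched as a simple greedy matching; this is only an illustration of the idea, and the participant records, field names, and the summed-absolute-difference criterion are assumptions, not the study's actual procedure.

```python
# Hypothetical sketch of the pairwise matching described above: each
# autistic child is paired with the unmatched TD child whose BPVS 3 and
# RCPM scores are closest. All data and field names are invented.

def match_participants(autistic, td):
    """Greedy one-to-one matching: for each autistic child, pick the
    remaining TD child minimising the summed absolute score difference."""
    remaining = list(td)
    pairs = []
    for a in autistic:
        best = min(
            remaining,
            key=lambda t: abs(a["bpvs"] - t["bpvs"]) + abs(a["rcpm"] - t["rcpm"]),
        )
        remaining.remove(best)  # each TD child is matched at most once
        pairs.append((a["id"], best["id"]))
    return pairs

autistic = [{"id": "A1", "bpvs": 87, "rcpm": 23}]
td = [{"id": "T1", "bpvs": 87, "rcpm": 22},
      {"id": "T2", "bpvs": 120, "rcpm": 30}]
print(match_participants(autistic, td))  # [('A1', 'T1')]
```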
All participants were naive to the experimental aims and hypotheses. Written consent was obtained from parents. Gift vouchers were given to parents as a reward on their child’s completion of the experiment.&#13;
&#13;
Table 1 Means, standard deviations (SDs), and ranges for Raven’s Coloured Progressive Matrices (RCPM) score, and the British Picture Vocabulary Scale (BPVS 3) score for the Autistic and Typically Developing (TD) groups &#13;
Group	N	Mean	Standard Deviation	Range&#13;
RCPM				&#13;
Autistic	15	22.00	7.55	7.00-34.00&#13;
Typically Developing	15	23.60	7.28	7.00-34.00&#13;
BPVS3				&#13;
Autistic	15	118.73	22.95	87.00-159.00&#13;
Typically Developing	15	116.33	27.35	74.00-159.00&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1282">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="42" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1259">
                <text>The Impact of Spatial Locations Involving Schema Representations on False Memories</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1260">
                <text>Ji Yun Gan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1261">
                <text>While numerous studies have investigated the effects of schema on false memories, few have looked at how schematic framework involving spatial locations have influenced levels of true and false memories in different age groups. For this study, two separate analyses were conducted; both analyses required participants to study four environment scenes, which contained schema- consistent objects that were placed in either schema-expected or schema-unexpected locations and schema- irrelevant objects. After each scene, a distractor task was presented, followed by the test scene. In the first analysis, false memory rates were examined by adding objects, which were not present during study, into test scenes; in the second analysis, false memory rates were assessed by shifting schema-consistent objects from a schema-expected to a schema-unexpected location or vice versa between study and test scene. In both analyses, target objects that remained in the same location for both study and test scenes assessed for true memories. Three different age groups were studied; younger children aged seven and eight, older children aged nine and ten, and adults who were university students. Results revealed that overall, adults were more schema-bound, and had significantly higher levels of true memories as well as significantly lower levels of false memories compared to younger and older children. Furthermore, schema-inconsistent objects attracted lower levels of false memories across all age groups. However, objects that shifted from a schema-unexpected to a schema-expected location yielded high false memories for object-location pairing. This study is of particular significance to the field of forensic psychology.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1262">
                <text>Schema, false memory, source monitoring, distinctiveness heuristic, object-location binding.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1264">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1266">
                <text>Rachel Coyle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1267">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1391">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1393">
                <text>The experiment was programmed using software called Psyscript and was run on a Mac laptop. Four different environments were used during the experiment: a kitchen, a living room, an office and a bathroom. For the practice run, a separate image of a seminar room was used. All the photographs were standardized across all four environments, with each photograph being 1300 x 864 pixels, to ensure that the quality and clarity of each photograph was the same. For every environment image, three different versions of the study scene were prepared, ensuring that each of the six schema-relevant target objects had the opportunity to appear in a schema-unexpected location, in a schema-expected location, or not at all. Moreover, for every version, two test scenes were prepared, varying which of the target objects initially placed in schema-expected or schema-unexpected locations during the study phase would be shifted in the test scene. Figure 1a is an example of a bathroom scene during the study phase and Figure 1b is an example of the test scene for that version. The program was set to ensure that the sequence of the four environment images would be pseudo-randomized for counterbalancing purposes, such that all the scenes were presented once, whereas the versions and test scenes selected were randomized. Moreover, the target objects circled during the test scenes were also pseudo-randomized, such that each object would only be circled once. For the practice run, both the study scene and the test scene were presented in laminated hardcopy form. Two separate slips of paper were prepared, one reading “Was this object anywhere in this picture before?” for participants allocated to the Presence condition, and one reading “Was this object in this place before?” for participants allocated to the Location condition. The paper slips containing the questions were left on the table for participants to refer to.&#13;
&#13;
Figure 1a The above image depicts version 1 of the bathroom scene. The two target objects in schema-expected locations are the shampoo and toothpaste, whilst the two target objects in schema-unexpected locations are the mirror and toilet brush, and the schema-irrelevant objects are the file, glove, and toy.&#13;
&#13;
Figure 1b The above image depicts Test 1 of Version 1 of the bathroom scene. The mirror has now been shifted from a schema-unexpected location to a schema-expected location, whilst the toothpaste remains in the same position. The shampoo has now been shifted to a schema-unexpected location. The toilet paper and weighing scale, which were not present during the study scene, are now present in the schema-expected and schema-unexpected locations respectively, with the toilet paper being circled for the participant to respond to. The schema-irrelevant objects that were added were the jacket, pencil case and handbag. &#13;
&#13;
Design:&#13;
This study consists of two analyses, addressing two research questions: first, whether location affects true and false memories, and second, what shifts in location do to memory for the original object (condition 1) and for the object-location pairing (condition 2). The first analysis investigates true and false memories involving objects that were present and not present at study, whilst the second analysis investigates true and false memories for objects that were present at study and were later shifted in the test scene. Hence, a mixed ANOVA design was used to address the first of these questions. The within-subjects independent variables were study status (present, not present) and the schema appropriateness of the object location (schema-expected, schema-unexpected, irrelevant). The between-subjects factors were condition (Presence, Location) and age group (younger children, older children, adults). The “yes” responses for objects that were present in both scenes but not shifted, and for objects that were not present during the study scenes but were present in the test scene, were analyzed. &#13;
For objects that were shifted between the study and test scenes, the within-subjects factors were schema (schema-expected, schema-unexpected) and the shifting of objects (shift, no shift). The between-subjects factors were condition (Presence, Location) and age group (younger children, older children, adults). The dependent variable was the accuracy of responses given, to compare the difference between objects that shifted and objects that did not shift. &#13;
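The scoring of "yes" responses described above can be sketched as a simple aggregation: a "yes" to an object present at study counts toward true memory, and a "yes" to an added object counts toward false memory. The trial records and field names below are invented for illustration, not the study's data.

```python
# Illustrative sketch (invented data and field names) of aggregating
# yes/no responses into true- and false-memory rates, per the design above.

def memory_rates(trials):
    """trials: list of dicts with 'present_at_study' (bool) and
    'response' ('yes'/'no'). Returns (true_rate, false_rate)."""
    old = [t for t in trials if t["present_at_study"]]
    new = [t for t in trials if not t["present_at_study"]]
    true_rate = sum(t["response"] == "yes" for t in old) / len(old)
    false_rate = sum(t["response"] == "yes" for t in new) / len(new)
    return true_rate, false_rate

trials = [
    {"present_at_study": True,  "response": "yes"},   # hit (true memory)
    {"present_at_study": True,  "response": "no"},    # miss
    {"present_at_study": False, "response": "yes"},   # false memory
    {"present_at_study": False, "response": "no"},    # correct rejection
]
print(memory_rates(trials))  # (0.5, 0.5)
```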
Procedure:&#13;
The experiment consisted of a study phase, a distractor task and a test phase, which took an estimated 10 minutes to complete and was conducted in an unoccupied classroom at Burnley Primary School, where participants were tested individually. Each participant was required to undergo a practice run before the actual experiment took place, to ensure that the participant had understood what he or she had to do. In the practice run, the laminated image of the seminar room was presented alongside the paper slip with either the Presence question or the Location question, depending on which condition the participant had been assigned to. The participants were given 12 seconds to study the image. After 12 seconds, the participant was presented with another image with several target objects circled, which the researcher pointed to one by one. The participant was then prompted to respond verbally as to whether they had seen that object anywhere before during the study scene, or whether that object had been in that location before during the study scene. For both conditions, the participants were instructed to press either the “Y” or “N” key on the keyboard in response to whether they had seen the circled object anywhere in the picture before during the study phase (Presence condition), or whether they had seen the circled object in that particular location before (Location condition). Once the participants acknowledged that they had understood, they were presented with the actual experiment.&#13;
Each participant was required to study four different environments, for each of which one of the three versions would be selected. Each study scene lasted 12 seconds, after which a distractor task would immediately appear. The distractor task, which lasted 30 seconds, required the participant to hit any key on the keyboard whenever a specified animal (e.g. giraffe, frog, hippopotamus) appeared. A green tick would appear every time the participant successfully pressed a key before the specified animal disappeared. Once the 30 seconds were up, the distractor task would end, and one of the two test scenes for that environment would appear. A total of twelve objects would be circled sequentially, with the next object only being circled 0.5 s after the participant had given a response. Depending on which condition the participant was in, as each object was circled, the participant would be required to respond to the question “Was this object anywhere in this picture before?” (Presence condition) or “Was this object in this place before?” (Location condition). If a participant in the Presence condition deemed that the object had been somewhere in the picture before, he or she would respond by pressing “Y” for Yes on the keyboard; if it was deemed not to have been in the picture before, the participant would press “N” for No. The same procedure applied to the Location condition. Once the participant had responded to all 12 objects, a different environment scene would appear, and the participant would repeat the process until all four scenes had been shown. </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1392">
                <text>A total of 155 participants, representing three different age groups, took part in this research study. The three age groups consisted of younger children aged seven and eight, older children aged nine and ten, and adults, who were university students. Forty older children took part in the Presence condition (mean age = 9.52, SE = 0.08; 16 males, 24 females) and 38 older children took part in the Location condition (mean age = 9.47, SE = 0.08; 10 males, 28 females). As for the adults, 18 university students took part in the Presence condition (mean age = 19.67, SE = 0.21; 4 males, 14 females) and 18 university students took part in the Location condition (mean age = 19.94, SE = 0.25; 4 males, 14 females). For the younger children group, there were a total of 22 participants in the Presence condition (10 males, 12 females; mean age = 7.32, SE = 0.10) and 19 participants in the Location condition (9 males, 10 females; mean age = 7.32, SE = 0.11). The participants in the younger children group were recruited from a school located in Burnley. As the participants were all below the age of consent, consent forms were given to the participants’ parents to indicate that they allowed their child to participate in this study. This research was given approval by the Psychology Department Ethics Committee, and adhered to both the British Psychological Society’s and the American Psychological Association’s guidelines.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="35" public="1" featured="1">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1099">
                <text>Analogical transfer beyond the analog</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1100">
                <text>Radhika Kuppanda</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1101">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1102">
                <text>Analogical problem solving involves transferring the method used to solve the base analog onto the target analog, based on the structural similarity they share. Studies have found that experts have no difficulty in solving domain-specific analogical problems, while novice problem solvers fail to solve such problems due to their difficulty in retrieving the base analog. Failure to recollect the correct base analog forces the problem solver to solve the problem in an “act first, think later” manner: they use a number of maximizing moves within the problem space to reach the goal state quickly. Use of such maximizing moves in solving analogical problems leads to an impasse, at which point alternative moves must be sought. The current study tries to overcome the problem of retrieving the correct base analog by implementing an additional factor, termed an extra constraint, in solving analogical problems. This extra constraint inhibits the problem solver from choosing problem moves that aim to maximize their progress toward the goal state, moves which must essentially be avoided in analogical problem-solving tasks. A secondary aim focuses on examining whether there is any difference between adolescent and adult problem solvers. Method: A total of 64 participants, aged 12-15 and 18-21 years, were administered three problems (2 analogical and 1 non-analogical). Results: The predictor variables (age and money) were not able to predict that participants from the older age category would perform better than the younger age group on any of the problems. Regarding the second aim, results showed that the older age group was able to solve more problems successfully than the younger age group.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1103">
                <text>analogical transfer&#13;
insight problem solving&#13;
extra constraints &#13;
developmental differences&#13;
maximization of progress</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1104">
                <text>The test materials consisted of paper-and-pencil tasks (see appended booklet). Each participant was provided with a booklet consisting of a set of 5 problems, comprising three experimental tasks and two filler tasks. The first problem was the analogical source problem (sheep dog problem), followed by a filler task (anagram solution). The second problem was the transfer problem (9-ball problem), followed by a second filler task (algebra solution). The last problem was the non-analog problem (cheap necklace problem). There was space provided under each of the problems to allow the participant to work out the solution. Solutions to each of the problems were also given to the participants. &#13;
&#13;
&#13;
Design and Procedure&#13;
&#13;
The study design comprised two between-subjects factors. The first factor was Age (12-15 vs. 18-21 years). The second factor was Resource (£8 vs. £12). The dependent variable was the number of correct solutions. The aim of the research was to assess whether the two predictor variables, age and money, would predict whether the participant solved the problems correctly or incorrectly.&#13;
&#13;
As per the BPS rules, confidentiality and anonymity of participants were strictly maintained. The study was conducted in a classroom setting, with 16 participants administered the problems at a time. Each participant from each age group was first assigned to the low- or high-resource condition: 50% of the participants from each of the older and younger age groups received the low-resource condition (£8) and the other 50% the high-resource condition (£12). Participants received the booklet containing the 3 problems and 2 filler tasks, and each participant was given 5 minutes to attempt each problem. After five minutes, the solution to each problem was shown. The problems contained in the booklet were as follows:&#13;
•	Source problem (killer dog)&#13;
•	Filler task (anagrams)&#13;
•	Transfer problem (ball problem, £8 or £12 versions)&#13;
•	Filler task (algebra)&#13;
•	Non-analogical problem (cheap necklace).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1105">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1106">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1107">
                <text>Kuppanda2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1108">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1109">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1110">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1111">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1112">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1113">
                <text>Tom Ormerod</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1114">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1115">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1116">
                <text>The study was conducted on a total of 64 participants divided into two groups:&#13;
Adolescents (12-15 years): 32 participants recruited from schools.&#13;
Adults (18-21 years): 32 participants recruited from colleges.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1117">
                <text>logistic regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="31" public="1" featured="1">
    <fileContainer>
      <file fileId="84">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d9ec28d2595cae82a23d00f217468f9b.doc</src>
        <authentication>0b3f1388984a2d5a7508900b80476211</authentication>
      </file>
      <file fileId="85">
        <src>https://www.johnntowse.com/LUSTRE/files/original/74fc7eead2f61c385212a7bae93eff2a.txt</src>
        <authentication>d6d530c5d70a86ab26cc60e890ba0a43</authentication>
      </file>
      <file fileId="86">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e05a0d5300b408575310f0f4b2cd424b.csv</src>
        <authentication>8dd217dfaef24c4c9a41f8b2ee5a1738</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1024">
                <text>Training Transfer Between False-belief, Card Sorting and Counterfactual Reasoning in Children with ASD.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1025">
                <text>Amna Ahmed</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1026">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1027">
                <text>Previous training studies with typically developing (TD) children and children with Autism Spectrum Disorder (ASD) show that theory of mind and executive functions are two interrelated domains, and that training in one task can lead to improvement on the other. This training study aimed to examine the developmental relationship between three domains (Theory of Mind (ToM), Executive Functions (EF) and Counterfactual Reasoning (CR)) in children with ASD. A group of 30 children diagnosed with ASD were randomly allocated to one of three training groups, each receiving training in one of the three domains. After training, the entire sample was tested to measure improvements. Results indicate that ToM training led to improvement on the EF and CR tasks, while EF training did not lead to ToM improvement and CR training did not lead to EF improvement. Findings are discussed and a novel cognitive model is proposed to account for the observed outcomes.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1028">
                <text>ASD, Training study&#13;
Domain general&#13;
Theory of Mind&#13;
Counterfactual reasoning&#13;
Executive Functions</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1029">
                <text>Following the design of Kloo and Perner (2003), children were first pretested. The pretest involved measures of verbal and nonverbal ability, two false-belief tasks followed by a card sorting task, and two counterfactual reasoning tasks. The pretest was scored to create a baseline for the participants' abilities in each of the areas assigned to the training groups. Children were then randomly assigned to one of three experimental training groups. Each group was given two sessions of training (approximately 1 week apart) on one of the three areas: false belief, counterfactual reasoning or DCCS. A posttest was given a week after the second training session; it was similar to the pretest in design but used different materials. The posttest was given to the children to measure any improvements in performance after training and to examine any crossover effects between the different training groups. Finally, the children were given a follow-up test (approximately 6 weeks after the posttest) to investigate whether the effects of training were lasting. All of the sessions took place in a quiet room in the child's school.&#13;
&#13;
            Procedure and Materials&#13;
Pretest and posttest. Both sessions that preceded and followed the training sessions involved tasks measuring performance in false belief, counter-factual reasoning and card sorting.&#13;
False-belief. One of two traditional unexpected transfer tasks, based on Wimmer and Perner (1983) and modeled after Baron-Cohen et al.’s Sally-Anne task (1985), was administered at pretest. A scene was enacted for the child using wooden toy figures and a kitchen model, in which an item is unexpectedly transferred during the protagonist's absence. The stories were altered slightly to better fit the knowledge of a Bahraini child by changing character names and making other alterations where appropriate. However, the main content of the stories remained very similar to the originals. After the story was told, the character returned to the scene and the child was then asked a false-belief test question such as 'where do you think Ahmed will look for his teddy bear now?', followed by two control questions (memory and reality). One of the two stories was administered in the pretest and the other in the posttest.&#13;
The false-belief pretest and posttest also included an unexpected content task, another task modeled by Wimmer and Perner (1983) as a measure of false-belief. In this task the child was presented with a closed familiar container (such as a Band-Aid box) and then the child was asked to guess the content of the box. The item in the box was then revealed to the child (a coin, for example). Next the item was placed in the closed box again and the child was asked 'what did you think was in the box before I opened it?' The correct answer should be Band-Aids, but most children with ASD find difficulty in suppressing the reality of what they know to be in the box so the answer they give is ‘a coin’. The child was then asked about another person’s state of mind 'what will (name another child) think is inside the box?’ Finally, the child was asked a memory control question 'what is really in the box?' &#13;
&#13;
Card Sorting. Following the false-belief task, the child was presented with a dimensional change card sorting task (DCCS; Frye et al., 1995). One set of cards (5cm x 10cm) was used, as well as two target cards (a blue house and an orange car) to be placed on two sorting boxes (12cm x 16cm). The card set had 12 testing cards (6 orange houses and 6 blue cars). The task involved two phases: in the pre-switch phase the participant was asked to sort the cards according to shape. After completing six trials successfully, the examiner explained to the child that the rules of the game would now change, and in the post-switch phase the child was asked to sort the cards according to colour rather than shape.&#13;
Counterfactual Reasoning. Lastly, the pretest and posttest sessions included two counterfactual thinking tasks based on Beck et al. (2011). One of the tasks in each session was enacted using wooden figures and materials such as a doll-sized bed, a cabin, teddy bears or pets. The second task was presented using a picture story consisting of three panels illustrating the events of the story. In these stories, both enacted and illustrated, a series of events leads to a specific end state. For example, the character picks flowers from the garden and places them in a vase on the table. Then the child is asked 'if Zainab had not picked the flowers, where would they be?' Two control questions (memory and reality) followed. As with the false-belief task, some alterations were made to the stories where appropriate to accommodate the child's environment and imagination. The use of two different methods of delivery for the counterfactual reasoning task was introduced to create more variation in the understanding of counterfactual reasoning and to distinguish this task from the false-belief task.&#13;
Training&#13;
Following the pretest, the participants were assigned to three experimental groups, each receiving two training sessions in one of the three areas: false belief, counterfactual reasoning or DCCS. The aim of the training was to provide the children with explanations and feedback based on performance.&#13;
False-belief training group. In each of the training sessions, the false belief group received two of four Ernie-says-something-wrong tasks (renamed to Ali-says-something-wrong) (Hale &amp; Tager-Flusberg, 2003), one unexpected transfer task different from the tasks administered during the pre and post-test sessions, and finally one unexpected content task. &#13;
Ali-says-something-wrong. As in the original Kloo and Perner (2003), the task was presented with the aid of three puppets. In each of the stories Ali carried out an action towards one of the puppets but then stated that he had done it to another puppet. In each training session the child received two of the four original stories, followed by a question about the content of Ali's statement and about the conflicting reality. The other two stories were then administered in the following session.&#13;
Unexpected transfer. The training sessions also included one story about an item being unexpectedly transferred in the protagonist's absence, following Baron-Cohen et al. (1985). The stories were enacted using wooden dolls and doll house furniture. This training task aimed to teach children about the main aspects of an unexpected transfer and to gradually guide them towards considering the character's false belief (Kloo and Perner, 2003).&#13;
Unexpected content. This task was presented using a different box and content for each test and training session. Examples of the materials used are a Smarties tube, a Pringles box and a crayon box. The training in this task aimed to help the child understand his or her own false belief as well as others’ states of mind.&#13;
&#13;
DCCS training group. The card sorting group was given training in two DCCS tasks in each of the training sessions. Both tasks involved sorting according to colour and number, and the switch was always from colour to number. The two tasks administered were the three dimension switch and the transfer sorting task. &#13;
Three dimension switch. In this card sorting task, the participant was presented with two target cards (one yellow house and two green houses) placed on a sorting box. The test cards were similar to the target cards on one dimension; either colour or number (two yellow houses, one green house). The child had to sort by colour, then number, then by colour again and finally by number one last time. Two sets of cards were used, one for each training session. The experimenter helped the child identify each dimension after each switch was made and the rules of the game were covered again. Each switch involved six trials. &#13;
Transfer sorting task. Here, the target cards remained the same as in the previous task (one yellow house and two green houses), but a new test card that was similar to the target cards on only one dimension (two yellow cars) was introduced. The test cards were sorted according to the dimension stated by the experimenter, starting with colour then switching to number.&#13;
&#13;
Counterfactual reasoning training group. Counterfactual reasoning tasks and false-belief tasks are used interchangeably in some studies, with questions testing both skills following a single story. However, in this study, the training groups had to receive different stories, followed by questions that tap only counterfactual thinking, in order to distinguish it from false-belief training. The purpose of this division in training was to ensure that each experimental group received training that did not overlap with the other groups', as the study ultimately aimed to measure crossover effects. The CR group received two tasks in each training session. As in the pretest and posttest, one of the tasks was enacted using figures and the other was presented as a picture story. The stories are based on Beck et al. (2011) and Guajardo and Turley-Ames (2004).&#13;
Figure stories. Following Guajardo and Turley-Ames' (2004) counterfactual thinking tasks, the children were shown a story, presented using wooden dolls, in which an event occurs (usually as a consequence of an action taken by the protagonist), and the child was asked to generate alternative scenarios that would have prevented the occurrence of that event. For example, the character is drawing a picture using pencil colours when the colour breaks and as a result he cannot finish his drawing. The question following this story is 'what could the character have done so that he would have drawn the rest of the picture?', and the child is to give as many responses as he/she can generate. Other scenarios include avoiding breaking a glass, keeping their clothes clean, taking a nap that leads them to miss their favourite show, and someone eating the character's last chocolate bar. In the training sessions, the examiner walks the child through the logic of having different actions leading to alternative endings.&#13;
Picture stories. The second task in the counterfactual training involved a single picture story based on Beck et al. (2011). The images were digitally drawn using Adobe Illustrator, and the stories showed a sequence of three square panels. However, the question format following the stories differed from the task given using figures. In the picture stories task, the child is presented with a simple story of consequential events followed by a question about where someone or something would have been if a certain event had not occurred. For example, one of the stories showed a cat napping on top of a car; the cat then spies a bird flying by and chases the bird all the way to the traffic light. The question associated with this story is 'if the cat had not spied the bird, where would the cat be?' Similar illustrations include a man receiving a call to meet a friend, a girl picking flowers, a drawing flying out of an open window and a man who gets sand on his shoes. The training aims to give the child some insight into how an occurrence can alter the course of events resulting in certain outcomes, and thus how, if the occurrence had not taken place, we would be presented with a counterfactual state.&#13;
&#13;
Follow-up test. The follow up test was added to the experiment to measure whether children with ASD maintained any effects gained from the training past the posttest. Therefore, this test was similar to the pretest and posttest in design; it included a false belief task, a card sorting task and two counter-factuality tasks. However, the materials and stories used were all different from those used previously in the tests and training. The follow-up test took place 6 weeks after the post-test session. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1030">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1031">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1032">
                <text>Ahmed2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1033">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1034">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1035">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1036">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1037">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1038">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1039">
                <text>Charlie Lewis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1040">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1041">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1042">
                <text>Participants were 30 children with ASD (2 girls, 28 boys; M age = 6.5 years, SD = 24 months). Children, recruited from special education schools in Bahrain, had received a diagnosis of ASD from a team of qualified educational psychologists based either on the DSM-IV or on CARS II and OWL</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1043">
                <text>ANOVA&#13;
mixed effects analysis&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="29" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="988">
                <text>Competence and Warmth: How Gender Impacts Perceptions of Male and Female Speakers.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="989">
                <text>Jayne Summers</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="990">
                <text>2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="991">
                <text>Using the stereotype content model as a theoretical background, this study aimed to investigate the relationship between gender stereotypes and judgements of warmth and competence. Visual appearance has long been used to research these judgements, while auditory cues have often been overlooked. This study therefore focused on judgements made about voice alone and did not present participants with predetermined gender labels. 61 participants – aged 19 to 60 – listened to either 2 male or 2 female speakers talk about domestic violence and cancer research. Domestic violence is here defined as a women-centric topic, while cancer research is considered gender neutral. Participants completed person perception inventories for each speaker, rating them on 7-point Likert scales on 10 competence and 10 warmth items. They also completed a sexism inventory to determine whether sexism predicted a more favourable attitude toward male speakers. A 2 (gender: male vs. female, between subjects) x 2 (topic: domestic violence vs. cancer research, within subjects) ANOVA was conducted; female speakers were judged as more competent than males when speaking on domestic violence but not cancer research. They were considered warmer than men in both cases. This indicates that women are seen as competent when speaking on issues that directly affect them, suggesting that they should be taken more seriously when speaking out about their own rights. However, traditional warmth stereotypes regarding women were upheld. This, along with further implications, is discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="992">
                <text>gender&#13;
stereotypes&#13;
competence&#13;
warmth&#13;
stereotype content model</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="993">
                <text>Items. 10 competence items and 10 warmth items were selected to compile a 20-item list of characteristics on which participants judged the speakers. Of these items, 11 were taken from Rudman &amp; Glick (1999) and the remaining 9 were considered in the original SCM. Items used in the competence and warmth scales were found to be reliable across speech topics, namely cancer research (CR) and domestic violence (DV) (competenceCR α = .893, competenceDV α = .931). This indicates that the scales used were highly reliable. Similarly, for the warmth dimensions, Cronbach's Alpha was suitably high (WarmthCR α = .918, WarmthDV α = .944). The reliability for the sexism inventory was also acceptable, with an α value of .826. Examples of the competence and warmth dimensions can be seen below, while a full list can be found in Appendix A. Competence: confident, ambitious, intelligent. Warmth: trustworthy, likeable, supportive.&#13;
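The reliability coefficients reported above follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals). As a minimal sketch only (hypothetical function and variable names; the archived project's own analysis was not done in Python):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.

    item_scores: one list per item, each holding the participants'
    scores for that item in the same participant order.
    """
    k = len(item_scores)                                # number of items
    totals = [sum(vals) for vals in zip(*item_scores)]  # scale total per participant
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))
```

Perfectly consistent items drive alpha toward 1; values around .8-.9, as reported above, are conventionally taken as high reliability.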
Speeches. Two speeches were recorded for the purpose of the experiment, one focused on domestic violence and the other on cancer research. The speeches were written to closely match each other in terms of wording and the information being presented. For instance, the opening and closing sentences of each speech were similarly structured, as seen below.&#13;
Table 1. Examples of speech text.&#13;
Opening sentence (domestic violence): Domestic Violence. A topic that is often glossed over as something that effects other people - not me; not you.&#13;
Opening sentence (cancer research): Cancer. A topic we don't often like to think about – something that effects other people, but not me: not you.&#13;
Closing sentence (domestic violence): By going to our website www.dvrefuges.co.uk you can find out more information about the great work women's refuges around the country do, and help them continue to change women's lives by donating to our cause.&#13;
Closing sentence (cancer research): By going to our website www.ukcancer.co.uk you can find out more information about the great work that we do, and by donating to our cause, help us continue to help people diagnosed with cancer live a normal life.&#13;
The details of the speeches differed, and the content was varied enough not to be obviously the same to participants, but the speeches were largely similar, as can be seen in Appendix B.&#13;
Four speakers were responsible for recording the two speeches: a male and a female speaker for each topic. This allowed participants to hear both speeches spoken either by two male or by two female speakers. All four speakers were from the same region and had northern accents; however, two speakers' accents differed slightly from the remaining two, which may have been particularly noticeable to northern participants. To account for this, one speaker with each accent was assigned to each topic condition, so any accent effects were counterbalanced and can be assumed not to have influenced judgements.&#13;
Speeches were recorded using an iPhone 6 microphone and edited using Audacity to eliminate background noise and static. Recordings were then given a plain video image of a black background with text reading either 'Recording One' or 'Recording Two' respectively. Because recordings were counterbalanced across conditions, all four recordings were presented either first or second in at least one condition, so in total 8 versions of the recordings were made and embedded into Qualtrics, where the body of the survey was hosted. Participants listened to the recordings using Sony headphones during the experiment.&#13;
Procedure&#13;
Participants were assigned to one of four conditions. In each condition they were asked to listen to the first speech, either domestic violence or cancer research, spoken by either a male or female speaker. After listening to the speech, they proceeded to the next online page and completed the speaker evaluation, rating the speaker on the 20 warmth and competence dimensions. They indicated how well they believed each item fit the speaker by choosing a point on a 7-point Likert scale (1 = completely disagree, 7 = completely agree). Following this they listened to the second speech, spoken by a different speaker of the same gender, and then completed the same speaker evaluation for the second speaker. Finally, they completed the sexism inventory (The Ambivalent Sexism Inventory, Glick &amp; Fiske, 1996), which measured the participants' explicit sexist attitudes on a 5-point Likert scale (1 = strongly agree, 5 = strongly disagree). A copy of the items in this inventory can be found in Appendix C. As this was a 2 (gender: female vs. male) x 2 (topic: domestic violence vs. cancer research) experimental design with repeated measures on the second factor, the only difference between conditions was the order in which the speeches were presented (domestic violence first or second) and the gender of speaker each participant heard (male or female), for the purpose of counterbalancing. So as not to influence participants to respond in a set way, the experiment was presented as concerning the evaluation of speakers and not as explicitly about gender.&#13;
Following the main section of the experiment, participants were asked a number of questions regarding how they experienced the recording, the first of which was answered on a 5-point scale (1 = strongly agree, 5 = strongly disagree). The question was: 'How likely are you to visit the website mentioned in this speech?' This was relevant in order to measure whether the competence of the speaker affected the likelihood of the participant engaging with the issue. Importantly, participants were also asked whether they considered each topic to be masculine or feminine, measured on a 7-point scale (1 = feminine, 4 = neither feminine nor masculine, 7 = masculine). This was included to validate the assumption that the domestic violence topic would indeed be judged as more women-centric and the cancer research topic as neutral. It is therefore of note that over 50% of participants considered domestic violence to be a feminine topic, others considered it gender neutral, and very few considered it a masculine topic. The majority of participants judged cancer research as gender neutral, as intended.&#13;
Finally, participants were asked whether or not they had any experience of the topic at hand, either personally or through a friend or family member, as this may have caused them to make more favourable judgements about the topic they were more invested in. Participants also gave their gender, nationality and age. Gender and nationality were exploratory variables of particular interest, owing to the belief that women may be more likely than men to evaluate women as competent. Nationality was of interest because people from other cultures, particularly Eastern cultures, may hold different gender roles from those in the UK, and their responses during the experiment may have reflected this. Once the experiment was complete, participants were fully debriefed and had the chance to enter a competition to win a prize in return for their participation.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="994">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="995">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="996">
                <text>Sumners2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="997">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="998">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="999">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1000">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1001">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1002">
                <text>Tamara Rakic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1003">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1004">
                <text>Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1005">
                <text>61 participants (14 male, 41 female, and 6 non-binary people), with an age range from 19 to 60 (M = 24.95, SD = 9.63), were recruited through opportunity and snowball sampling</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1006">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="27" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="950">
                <text>The Effects of Schema-typical and Atypical Contexts on Memory for Brand Names of Products</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="951">
                <text>Thanita Soonthoonwipat</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="952">
                <text>2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="953">
                <text>The memory for an advertisement can be affected by the way it is constructed. In general, the greater the distinctiveness, the better the memory performance. Traditionally, it has been assumed that a whole memory episode will be better remembered if it features any odd element(s), because such elements are more attention-demanding and create stronger memory traces. However, recent evidence suggests that the distinctiveness effect might not spread to everything; it might only affect the distinctive elements themselves without necessarily affecting their linkages with other elements. Accordingly, within an advertisement, memory for each element can differ. We manipulated the distinctiveness effect by composing products with schema-typical contexts (undistinctive condition) and schema-atypical contexts (distinctive condition). Participants observed 20 advertisements; 10 were schema-typical and the other 10 were schema-atypical. They then completed recall and recognition tests which allowed us to explore how far the distinctiveness effect could extend. We found that only product recall and recognition in the schema-atypical condition were robustly enhanced; other variables were not significantly affected. These findings go against the traditional view and conform with the recent research. We suggest that, in the schema-atypical condition, the products and their contexts made each other distinctive and hence were better remembered. In contrast, the brand names and product-brand bindings were schema-neutral; thus, they did not receive more attention and were not better remembered. The results are further interpreted to form some practical implications for improving advertising effectiveness.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="954">
                <text>Distinctiveness effects&#13;
Schema&#13;
Memory&#13;
Product recall&#13;
Product recognition&#13;
Brand recall&#13;
Brand recognition&#13;
Product-brand binding</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="955">
                <text>The stimuli were 40 newly constructed print advertisements (in digital format). Print advertisements were employed because they allow better experimental control (Keller, 1987). Half of these advertisements belonged to the toiletries category (e.g. shampoo, sunscreen, and toothpaste), whereas the other half belonged to the foods category (e.g. pizza, sandwiches, and fried chicken). For each category, there were 10 types of products. For each product, there were two versions of its advertisement, schema-typical and schema-atypical (only one of which was viewed by each participant). The schema-typical advertisements were those in which the product was bound with an expected context (e.g. a toothpaste appearing in a bathroom scene), while the schema-atypical advertisements were those in which the product was bound with an unexpected context (e.g. a toothpaste appearing in a bedroom scene). &#13;
In terms of stimulus construction, there were three key elements in every advertisement: the first was the product; the second was the background or scene illustration, which served as the context of that advertisement; and the last was the brand name. The first two elements formed the advertising picture, which together with the third formed the complete advertisement. The researchers purchased stock images from the Shutterstock website (https://www.shutterstock.com). The purchased images (product shots, backgrounds, and decorative elements) were then retouched and composited into the print advertising pictures using Adobe Photoshop (Adobe Photoshop CC 2015). All the advertising pictures were controlled not to include any text, so that the only copy presented in each advertisement was its brand name. In respect of brand names, we invented new brand names for all 20 products. Each brand name was controlled to be easily pronounceable; they were names of one to three syllables, e.g. Hans, Raven, and Moana. The brand names, set in 48-point Candara type, were placed on top of every advertising picture. Figure 1 shows examples of stimuli. Table 1 shows the list of products, brand names, their schema-typical contexts, and their schema-atypical contexts. The illustrations of all 40 advertisements can be found in Appendix A.&#13;
&#13;
Figure 1. Examples of stimuli&#13;
&#13;
Table 1&#13;
List of products, brand names, their schema-typical contexts, and their schema-atypical contexts&#13;
No. | Product | Brand name | Schema-typical context | Schema-atypical context&#13;
Toiletries category:&#13;
1 | Soap | Flounder | Bathroom | Garden&#13;
2 | Shower gel | Naveen | Bathroom | In the bus&#13;
3 | Deodorant | Megara | Bathroom | Library&#13;
4 | Perfume | Attina | Bedroom | Street&#13;
5 | Sunscreen | Moana | Beach | Kitchen&#13;
6 | Shaving cream | Hans | Bathroom | Office&#13;
7 | Toothpaste | Pongo | Bathroom | Bedroom&#13;
8 | Talcum powder | Fauna | Bathroom | Beach&#13;
9 | Shampoo | Rolfe | Salon | Forest&#13;
10 | Lipstick | Armoire | Office | Cooking table&#13;
Food category:&#13;
11 | Sandwich | Duchess | Kitchen | On the stairs&#13;
12 | Fried chicken | O’Malley | Kitchen | Yoga room&#13;
13 | Yogurt | Rialey | Kitchen | In the bus&#13;
14 | Energy bar | Gaston | Sport field | Bedroom&#13;
15 | Pizza | Linguini | Restaurant | Bathroom&#13;
16 | Pasta | Tony | Kitchen | On the bed&#13;
17 | Soup | Perdita | Kitchen | Gym&#13;
18 | Raw burger | Gus | Kitchen | Study room&#13;
19 | Ice-cream | Bo Bo | Street | Library&#13;
20 | Fresh fruit | Raven | Garden | Bathroom&#13;
In addition, there was an effort to provide variability of context for both schema-typical and schema-atypical advertisements. To illustrate, for the schema-typical advertisements in the toiletries category, six of the 10 products were bound with a bathroom scene as their schema-typical context, while the other four products were bound with other schema-typical contexts (e.g. a beach scene for sunscreen). Similarly, for the foods category, six products were bound with a kitchen scene as their schema-typical context, while the other four were bound with other schema-typical contexts (e.g. a restaurant scene for pizza). Furthermore, for the schema-atypical advertisements, all 20 products had their own different schema-atypical contexts; for example, a forest scene for shampoo and a yoga room for fried chicken. Consequently, despite the effort to make the contexts of the schema-typical advertisements more varied, there was probably more variability among the schema-atypical ones.&#13;
Moreover, the classification of each context as schema-typical or schema-atypical for a particular product was initially based on the researchers’ judgement. A pilot study was then conducted in which five participants were asked to judge whether each context was schema-typical or atypical for a particular product. All five participants’ judgements matched the researchers’ classifications for all products listed. &#13;
Furthermore, we constructed additional materials for the recognition test: 20 foils of similar product images and 20 foils of similar brand names. For the foil product images, we purchased another set of stock images (product shots and decorative elements), which were retouched and composited into another 20 product images shown as icons in isolation. Each foil was designed after one of the target product images; for example, we constructed the foil image of a toothpaste tube to be paired with the target image of a toothpaste tube. These two images were controlled to look similar in terms of product type and size, but different in product design (packaging and colour scheme). For the foil brand names, we invented a further 20 similar brand names, 10 for the toiletries category and another 10 for the foods category. All foil brand names were controlled to have the same characteristics as the target brand names: easily pronounceable names of one to three syllables. &#13;
Design and data analysis strategy&#13;
The overall design and the variables. A repeated measures design was employed in this study. The within-subjects independent variable was the advertising context, which had two levels: schema-typical and schema-atypical. Six dependent variables were examined in separate analyses. The first three came from the recall test: the percentage of correctly recalled products (product recall), the percentage of correctly recalled brand names (brand name recall), and the percentage of correctly recalled product-brand bindings (product-brand binding recall). The first two variables were simply calculated as the number of correct answers divided by the total number of advertisements at each level. These variables addressed whether recall of products and brand names would be better when the advertising contexts differed from their typical schemas. The third variable, product-brand binding recall, was calculated as the number of correctly recalled sets (counted when a product was written together with its matching brand name) divided by the number of correctly recalled products. This third variable therefore explored how far memory for a recalled product extended to its brand name. &#13;
Likewise, the other three dependent variables came from the recognition test: the percentage of correctly recognized products (product recognition), the percentage of correctly recognized brand names (brand name recognition), and the percentage of correctly recognized product-brand bindings (product-brand binding recognition). The fourth and fifth variables were similarly calculated by dividing the number of correct answers by the total number of advertisements at each level. These variables addressed whether recognition of products and brand names would be better when the advertising contexts differed from their typical schemas. The sixth variable, product-brand binding recognition, was calculated as the number of correctly recognized sets (counted when participants concurrently picked the right product image and its matching brand name) divided by the number of correctly recognized products. This last variable therefore explored how far memory for a recognized product extended to its brand name. &#13;
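The six scoring rules above reduce to two simple ratios. As a hedged sketch only (illustrative function names, not part of the archived project's materials):

```python
# Illustrative sketch of the scoring rules described above.
def item_pct(n_correct, n_ads_per_level):
    """Product or brand-name recall/recognition score: correct answers
    divided by the number of advertisements at that level (10 per level here)."""
    return 100.0 * n_correct / n_ads_per_level

def binding_pct(n_correct_pairs, n_correct_products):
    """Product-brand binding score: correctly paired product+brand sets
    divided by the number of correctly remembered products."""
    return 100.0 * n_correct_pairs / n_correct_products
```

Note the different denominators: the binding scores are conditional on the products already remembered, which is what lets them index how far memory extended from product to brand name.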
Presentation phase. In terms of experimental design, 20 advertisements were first presented to participants. For counterbalancing purposes, the 32 participants were divided equally into four groups (eight participants in each). Each group was assigned a different set of advertisements. Each set consisted of 20 advertisements, 10 from the toiletries category and 10 from the foods category. Of the 10 toiletries advertisements, half were schema-typical and the other half schema-atypical. Of the five schema-typical advertisements, three had a bathroom as their context and the other two had other typical contexts. The same arrangement applied to the foods-category advertisements: three schema-typical advertisements were bound with a kitchen scene, another two schema-typical advertisements were bound with other schema-typical contexts, and five different advertisements were schema-atypical. Appendix B shows the four different sets of stimuli. However, the actual order of advertisements presented to participants was not as shown in Appendix B, as all 20 advertisements in each set were randomly mixed; the positions of advertisements therefore differed in each set to minimize order effects. All advertisements were presented on a laptop screen (13-inch MacBook Air), each shown for 10 seconds using a timed PowerPoint display.&#13;
After the presentation of the stimuli, participants completed a two-minute distractor task. Immediately after this interval, they were administered a free recall test followed by a recognition test. To settle on the most appropriate design, we ran a small pilot study before finalizing the experimental procedure to determine a suitable memory interval (the duration of the distractor task). Two participants (both female, mean age = 25 years) completed the pilot, in which a 10-minute interval was employed; this produced a ceiling effect for product recognition but a floor effect for brand name recall and recognition. We therefore shortened the interval to two minutes.&#13;
Test phase. In the free recall test, participants were asked to write down on the answer sheet every product and brand name they could remember. Figure 2 shows the slide presented for the recall test. The recognition test was divided into two subsections: a toiletries subsection and a foods subsection. Each subsection contained 10 questions, one for each of the 10 products in that category, giving a total of 20 main questions. The questions were presented on the same laptop screen (13-inch MacBook Air), with the toiletries-category questions first, followed by the foods-category questions. &#13;
&#13;
Figure 2. The PowerPoint slide used in the recall test&#13;
Regarding the construction of the recognition test, each question comprised two sub-questions: a product question and a brand name question. Each product question offered two choices (A and B): the target product image and a foil showing a similar product. The correct answers varied randomly between A and B throughout the test. Each brand name question offered 20 choices (1 to 20), comprising the 10 target brand names and 10 foils of similar brand names. Within each category, the correct answer differed for every brand name question and varied randomly between odd (1, 3, 5, etc.) and even (2, 4, 6, etc.) choices throughout the test. Figure 3 shows examples of recognition-test questions; all questions can be found in Appendix C. &#13;
  &#13;
  &#13;
Figure 3. Examples of PowerPoint slides used in the recognition test&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="956">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="957">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="958">
                <text>Soonthoonwipat2017</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="959">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="960">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="961">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="962">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="963">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="964">
                <text>Adina Lew</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="965">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="966">
                <text>Psychology of Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="967">
                <text>There were 32 participants (18 females, mean age = 26.21 years, range 18-35 years). Eight of them were native speakers of English, while others had English as their second language</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="968">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
