<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=5&amp;sort_field=added" accessDate="2026-05-03T04:02:41+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>5</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="74" public="1" featured="0">
    <fileContainer>
      <file fileId="28">
        <src>https://www.johnntowse.com/LUSTRE/files/original/817c41573a9c56ee11930d194feca1ef.pdf</src>
        <authentication>fec8027de6e092210eb31aa35a2d4d85</authentication>
      </file>
    </fileContainer>
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1736">
                <text>The Shock Impact: An investigation of attitudes towards the use of shock tactics in charity advertisements.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1737">
                <text>Victoria Meadows</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1738">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1739">
                <text>While the use of shock has been praised for increasing attention, it has also been shown to cause distress and to negatively affect perceptions of the organization or brand. Shock advertising is increasingly popular in the non-profit sector, with organizations using shocking visual imagery to encourage viewers to take action against a cause or to increase donations. This study aimed to deepen our understanding of attitudes towards the effectiveness of shock advertising, and to uncover the attributes that contribute to it. Based on previous research into the effects of gender on advertisement preferences, we also analysed the opinions of male and female participants to unearth preferences for shocking or non-shocking advertisements. Three focus groups were conducted to collect attitudes towards charity advertisements. Participants were presented with six advertisements, split into three categories of health, animal, and child-based charities, each with one shocking and one non-shocking campaign. To compare genders, one focus group contained only males, one only females, and one was mixed. The effectiveness of shock was perceived as higher for health-related causes, lower for children’s charities, and mixed for animal causes. There was a difference between males and females in attitudes towards the use of shock in animal-based charities, with females engaging more with the non-shocking advertisement and males with the shocking one. Results from this research improve our knowledge of when and why shock should be used in charity advertisements, and how it can be used to target particular audiences.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1740">
                <text>Shock&#13;
Advertising&#13;
Gender&#13;
Charity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1741">
                <text>Participants&#13;
Sixteen participants took part in this study, all students attending Lancaster University, aged from 20 to 28 years. The sample comprised a majority of native English speakers (13), with two Romanian and one Panamanian native speaker (English as a second language). Participants were recruited through opportunity sampling and took part in the study voluntarily.&#13;
This study received departmental approval before data collection commenced.&#13;
Design&#13;
	The study consisted of three focus groups: one containing only females (FGF) and one only males (FGM), to examine any differences in attitudes between genders, and one of mixed gender (FG1), in order to assess possible conflicting attitudes within the group. Five students participated in the mixed focus group (three males, two females), five students in the female focus group, and six in the male focus group.&#13;
	Focus groups were conducted in a private room and lasted 40-50 minutes.&#13;
Materials &#13;
	The stimuli presented to participants were existing advertising campaigns released by non-profit organizations in the United Kingdom and United States of America. Three ‘non-shocking’ advertisements and three ‘shocking’ advertisements were chosen, with one centered around health, animal cruelty, and child abuse in both categories (Appendix A).&#13;
	‘Shocking’ advertising has been defined by Dahl and colleagues (2003) as something that violates the social norm, including content that is seen as disgusting, obscene, vulgar, morally offensive, or containing sexual references. Using this definition as a guide, the ‘non-shocking’ advertisements were chosen for their lack of these traits and did not include, for example, references to blood or death, obscene gestures, or violence. Adverts released by the National Society for the Prevention of Cruelty to Children (NSPCC), the National Health Service (NHS), and Battersea Dogs and Cats Home were chosen.&#13;
Again, using this definition, we selected ‘shocking’ advertisements for their inclusion of the shocking traits outlined by Dahl and colleagues (2003). Barnardo’s children’s charity was chosen for its obscene image of a distressed newborn baby with a Methylated Spirit bottle in its mouth. The Public Health Service’s Smoke Free advertisement, featuring a cigarette that morphs into bloodied guts and tissue, was chosen for its disgusting imagery. Lastly, People for the Ethical Treatment of Animals’ (PETA) ad featuring a dead, skinned animal was chosen for its use of offensive images of harmed animals.&#13;
These were printed out and presented to the participants on paper so they could have a closer look at the advertisements.&#13;
A discussion guide was created to direct the conversation in the focus groups (Appendix B). The guide was designed to ensure continuity between the groups, as advised by Malhotra (2008), helping to tailor the discussion to the topics of the research aims while also giving participants the opportunity to express their thoughts freely. Following Goulding’s (1998) guidelines, the discussion guide was flexible, enabling the facilitator to ask further questions in relation to what was brought up in conversation.&#13;
Procedure&#13;
	Participants were seated around a table and had access to refreshments throughout the focus group. They were given time at the beginning to get comfortable and talk with fellow participants. Each participant was given an information sheet (Appendix C) that detailed the aims of the research and what they were expected to do. They were informed that they could ask any questions they wished and had the right to withdraw at any point during or after the focus group. Once they had read the information sheet and understood what they were taking part in, participants signed the consent form (Appendix D) to agree to take part in the study.&#13;
	At this point they were informed that the recording would commence. The discussion guide was followed throughout, firstly introducing the topic area covered by the focus group and encouraging participants to consider advertising in general. Following this, they were asked specifically about charity advertisements and any overall feelings they had towards any they had seen. Participants then discussed the advertisements presented to them. Starting with the non-shocking advertisements, participants had time to view and discuss each advert one at a time, and were asked about its effectiveness and anything they liked or disliked about it. The definition of ‘shocking’ advertisements was then introduced and the procedure was repeated, presenting one advertisement at a time. Participants were then asked to compare their thoughts on which advertising tactic they thought was more effective, and whether this differed between the types of causes being advertised and the action being asked of the audience, for example a donation or a change in behavior. This was done in the same order for each group to ensure consistency. Lastly, any final thoughts from the group were collected, and participants were informed that they could email the investigator with any further thoughts if they wished. They were thanked for their participation and given a debrief sheet (Appendix E) containing more information on this research as well as the contact details of the researcher and supervisor.&#13;
	The recording was then transcribed, and analysed thematically through the use of NVivo qualitative data analysis software, to highlight common themes throughout all three focus groups. This enabled us to compare attitudes held towards the varying types of advertising campaigns, their causes, and any differences between genders.&#13;
Analysis &#13;
	The transcript for each focus group was entered into NVivo (QSR International Pty Ltd. Version 12, 2017) in preparation for thematic analysis. This was designed to uncover themes across the focus groups in a systematic way, identifying patterns in the opinions of the participants. To analyse the data accurately, the thematic guidelines proposed by Braun and Clarke (2006) were followed. The transcripts were first read thoroughly to ensure familiarity with the conversations. They were then coded in NVivo according to their content using an inductive approach, forming codes from the data itself rather than attempting to fit a pre-existing framework from past theories, thereby allowing us to broaden our inclusion of the attitudes recorded. The data collected in these codes were sorted into potential themes, ensuring consistency within and variation between the themes. These themes were then re-analysed to make sure they reflected the data collected, and the final themes were decided upon.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1742">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1743">
                <text>Text/nvivo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1744">
                <text>Meadows2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1745">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1746">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1747">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1748">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1749">
                <text>Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1750">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1751">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1752">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1753">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1754">
                <text>16 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1755">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="75" public="1" featured="0">
    <fileContainer>
      <file fileId="29">
        <src>https://www.johnntowse.com/LUSTRE/files/original/433bc8b147842b22913688daad5b82c3.pdf</src>
        <authentication>cd8e35e608f8c4e794a24714ed2ede85</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1756">
                <text>Assessing Cortical Hyperexcitability and Its Predisposition Using Two Types of Measurements</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1757">
                <text>Flora Zuo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1758">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1759">
                <text>This study aimed to explore cortical hyperexcitability in depth, using the pattern glare task and three questionnaires: the Cortex Hyperexcitability Index II, the Cardiff Anomalous Perceptions Scale, and the Multi-Modality Unusual Sensory Experiences Questionnaire. The pattern glare task induces on-the-spot hallucinations and distortions, while the questionnaires measure long-term, everyday unusual sensory experiences. In this study, both the questionnaires and the task were taken to measure the same underlying factor, cortical hyperexcitability, in that it was hypothesized that the predisposition to seizure-like hallucinations and distortions and the predisposition to everyday hallucinations and anomalous experiences would be associated. The pattern glare task comprised two blocks, one with a blindfold and one without, presented to participants in different orders to counterbalance order effects. In between the two blocks, participants answered the three questionnaires. The results showed no significant effect of the blindfold, suggesting that wearing the blindfold for five minutes increased the sensitivity of neither the eyes nor the visual cortex. Most of the relationships between pattern glare and the questionnaires were non-significant. The investigation of the association between the predispositions to the two types of hallucinations also failed to reach significance; only the MUSEQ and pattern glare showed a significant correlation. The migraine and migraine-with-aura groups appeared to be more sensitive to the phosphene phenomena; their sensitivity, though not statistically significant, could be clearly observed in the descriptive statistics.
Although the results did not support the research hypothesis, probably owing mainly to the poorly presented stimuli, the current study was to some extent able to expand the current understanding of cortical hyperexcitability demonstrated by previous work, and offered further possibilities for future studies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1760">
                <text>Along with the pattern glare task, three further questionnaires were used in the study: MUSEQ (Mitchell et al., 2017), CAPS (Bell et al., 2006), and CHI II (Fong et al., in press). This study was ethically approved by the Department of Psychology at Lancaster University on 11th May 2018.&#13;
Participants&#13;
The current study screened participants before they could take part in the experiment; the screening criterion was whether they had been diagnosed with photosensitive epilepsy or epilepsy, or had recently had brain or eye surgery. This criterion was adopted because viewing striped patterns of particular spatial frequencies may induce seizures in patients with photosensitive epilepsy (Wilkins et al., 1984).&#13;
None were excluded due to these conditions or a history of them. A total of 43 participants took part in the study, 15 males and 28 females. Ages ranged from 19 to 36, with a standard deviation of 2.92, and around half of the participants were native English speakers. The six participants who self-reported having migraine or migraine with aura were noted before the study, as the pattern glare task may induce or intensify their symptoms, causing visual discomfort, visual distortions, or a headache. Of these six, three were migraineurs with aura.&#13;
Stimuli and Procedure&#13;
The current study used stimuli printed onto cards, presented to participants at eye level from around 50 cm away. The patterns were all the same size, 20 mm × 15 mm, in black and white, and elliptical in shape. Under these conditions, the visual angle was calculated to be 12.84 degrees. The three questionnaires were all printed on paper, and participants were asked to read their answers aloud instead of writing them down. The plain black blindfold that participants wore during the study was bought from a drugstore.&#13;
Material&#13;
Three different patterns were used in this study, with spatial frequency gratings of 11 cpd (cycles per degree), 3 cpd, and 0.7 cpd respectively. All the patterns were achromatic, with a fixation dot in the centre. After each stimulus was presented, participants answered 17 questions about the intensity of the anomalous visual phenomena, the types of visual hallucinations, and whether they had a headache or dizziness after the presentation. The materials were adapted from the previous work of Braithwaite et al. (2014). The three questionnaires completed between the two blocks of stimulus presentations were MUSEQ, CAPS, and CHI II. MUSEQ (Mitchell et al., 2017) has 43 items loading onto six factors (Auditory, Visual, Olfactory, Gustatory, Bodily sensations, and Sensed presence); responses are given on a five-point Likert scale targeting the frequency of unusual sensory experiences. CAPS (Bell et al., 2006) has 32 items, also addressing anomalous experiences across different modalities. For each item, if participants confirmed that they had had related experiences, they rated those experiences on three five-point scales covering distress, intrusiveness, and frequency. CHI II has 30 items, each rated for frequency and intensity on a seven-point Likert scale, with zero meaning never or not intense and six meaning all the time or extremely intense. The questionnaire is a recently updated version of the original CHI, and its 30 items load onto three non-overlapping factors: heightened visual sensitivity and discomfort (HVSD), aura-like hallucinatory experience (AHE), and distorted visual perception (DVP).&#13;
For the MUSEQ and CAPS, the original unrevised questionnaires were used during the experiments; however, only parts of the answers given were used in the analysis. This decision was made because the data analysis would have been too complicated had all the factors been taken into consideration, especially when they were only partially related to the research question. Therefore, for the MUSEQ questionnaire only the Visual, Auditory, and Bodily modalities were analysed, and for CAPS the analysis was exclusively concerned with the temporal lobe experience factor.&#13;
For the non-blindfold block, all three stimuli were presented; for the blindfold block, only the medium and high cpd stimuli were included. The low frequency stimulus was excluded from the blindfold block because it was too mild to induce any hallucination in the participants. Its inclusion in the non-blindfold block served mainly as a check for suggestibility: participants who gave a high rating for the low frequency stimulus may produce unreliable scores on the other measures as well (Wilkins et al., 1984). Therefore, participants with excessively high ratings for the low frequency pattern would be excluded from the analysis.&#13;
Procedure&#13;
Prior to the experiment, participants were asked to sit in a specific spot where the distance between them and the stimuli was fixed at around 50 cm. They were then given the information sheet and consent form, which contained the information they needed in order to proceed with the study. The consent form included a list of questions about specific medical conditions, including epilepsy, photosensitive epilepsy, neurological and eye surgery, and migraine and migraine with aura. The researchers then confirmed that the participants did not, and had not, suffered from those conditions before the experiment could take place.&#13;
The first phase of the experiment was the pattern glare test, which comprised two blocks, one with the blindfold and the other without. Each participant was labelled with a number reflecting their order of participation: participants with odd numbers had the non-blindfold block first, and those with even numbers had the blindfold block first. The numbering and the manipulation of block order were concealed from the participants. The blindfold block contained two stimulus presentations, one at the medium spatial frequency (SF) and the other at the high SF. The low SF stimulus was not included because it served as a control in the non-blindfold block, having little to no effect (Braithwaite et al., 2013, 2015). Participants put on the blindfold before the stimuli were presented, wearing it for five minutes before the blindfold block began.&#13;
After finishing viewing each pattern, participants answered the 17 questions about the associated visual distortions. They were asked to read their answers aloud, and the answers were immediately recorded on a computer. There was no break between trials; participants moved on to the next pattern once they had answered all the questions.&#13;
Between the two stimulus presentation blocks, participants were asked to complete the three questionnaires: MUSEQ (Mitchell et al., 2017), CAPS (Bell et al., 2006), and CHI II (Braithwaite et al., in press). Completing the three questionnaires took approximately 20 minutes. Once they were completed, the next block of stimuli was presented, with or without the blindfold as appropriate. After both blocks and the three questionnaires were completed, participants were given the debrief sheet at the end of the experiment.&#13;
The entire process took about 30 minutes for native English speakers; for participants who spoke English as a second language, it took slightly longer, at around 35 to 40 minutes.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1761">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1762">
                <text>data/SPSS.sav&#13;
data/.JASP</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1763">
                <text>Zuo2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1764">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1765">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1766">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1767">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1768">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1769">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1770">
                <text>Jason Braithwaite</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1771">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1772">
                <text>Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1773">
                <text>43 Participants (15 males and 28 females)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1774">
                <text>ANOVA&#13;
Bayesian Analysis&#13;
Correlation&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="76" public="1" featured="0">
    <fileContainer>
      <file fileId="30">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4e3a3385ed408600eae4500b535495c8.pdf</src>
        <authentication>77939218cb4037e3126cc7d4f2cc61c7</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1775">
                <text>Cortical Hyper Excitability correlating with Visual Distortions and Hallucinations</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1776">
                <text>Nishtha Bakshi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1777">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1778">
<text>The primary focus of our study was how abnormalities in visual experience, such as visual distortions or hallucinations, relate to increased cortical hyper-excitability in the non-clinical population. Aberrant neural processes lead to anomalous experiences, and susceptibility to such visual distortions reflects elevated levels of cortical hyper-excitability. Methodologically, forty-eight non-clinical individuals completed the "Pattern Glare Task", in which they viewed striped grating patterns with different spatial frequencies, and also completed the Cortical Hyper-excitability Index (CHi) and the Cambridge Depersonalization Scale (CDS). The pattern glare results showed that individuals experienced more visual distortions at the medium frequency (3 cpd), and the CDS and CHi results further supported this finding. In conclusion, the study suggests that members of the non-clinical population do experience a degree of increased cortical hyper-excitability, and it establishes the utility of the pattern glare task, alongside the CHi and CDS, in adding to our existing knowledge. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1779">
                <text>Introduction&#13;
The major objective of this study is to understand the relationship between cortical hyper-excitability and visual hallucinations or distortions in the non-clinical population, and in particular how aberrant neural processes lead to anomalous experiences. This section details the methodology used to investigate and validate the hypotheses postulated by the research question of this project. The participants were 48 non-clinical individuals, who were tested with the Pattern Glare Task, the Cortical Hyper-Excitability Index and the Cambridge Depersonalization Scale.&#13;
Participants&#13;
Forty-eight individuals, undergraduates and postgraduates aged between 21 and 33, were recruited for the experiment via random sampling. The mean age of the participants was 24. Of these, 30 (62%) were male and 18 (38%) were female. None of the individuals reported any medical history of seizures or photosensitive epilepsy, and none had been diagnosed with migraine. Individuals suffering from migraine, migraine with aura, or photosensitive epilepsy were excluded from the study. &#13;
Materials&#13;
Pattern Glare Test&#13;
The pattern glare task presents striped patterns on three separate cards, each with a different spatial frequency: a low spatial frequency baseline grating (approx. 0.5 cycles per degree, cpd), a high spatial frequency baseline grating (approx. 12 cpd), and the crucial medium spatial frequency grating (approx. 3 cpd). The computerised version of the pattern glare task was adapted for this experiment into a paper-based version (Wilkins, 1995; Wilkins et al., 1984). The stimuli used in the experiment are shown in Figure 1. Individuals are asked to stare at the white dot in the center of each pattern for approximately 10-15 seconds while holding the pattern at arm's length. They are then asked a series of questions, namely whether they experienced any blurring of lines, bending of lines, fading, shimmering, flickering or shadowy shapes. Based on their experience of viewing each pattern, participants rate each question on a scale of 0-7, where 0 is the minimum and 7 the maximum (Wilkins et al., 1984; Conlon et al., 1999). A score is obtained for each pattern, and the difference between Pattern 2 and Pattern 3 is recorded; this is known as the '3-12 difference', i.e. the medium-frequency score minus the high-frequency score (3 cpd - 12 cpd). &#13;
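The '3-12 difference' amounts to simple arithmetic over the per-pattern ratings. The sketch below illustrates it; the function and variable names are my own, not part of the published task:

```python
# Illustrative sketch of the pattern glare scoring described above.
# Function and variable names are hypothetical, not from the source.

def pattern_score(item_ratings):
    """Total distortion score for one pattern: the sum of the 0-7
    ratings given to the distortion questions."""
    return sum(item_ratings)

def three_minus_twelve(medium_ratings, high_ratings):
    """The '3-12 difference': the medium-frequency (3 cpd) total
    minus the high-frequency (12 cpd) total."""
    return pattern_score(medium_ratings) - pattern_score(high_ratings)
```

A large positive value indicates many more distortions at the medium frequency than at the high frequency.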
&#13;
&#13;
 Cambridge Depersonalization Scale&#13;
The CDS is a self-report questionnaire used to measure the duration and frequency of any depersonalization symptoms an individual has experienced over the past six months (Sierra and Berrios, 1999). The instrument contains 29 items. Each item is rated on a Likert scale for both frequency (0-4, where 0=never, 1=rarely, 2=often, 3=very often, and 4=all the time) and duration, based on how long the experiences last on average (1-6, where 1=a few seconds, 2=a few minutes, 3=a few hours, 4=about a day, 5=more than a day, and 6=more than a week). The global score is the sum of all items (0-290). Sierra et al. (2005) established four well-determined factors that capture the different symptoms of depersonalization as underlying dimensions: ‘Anomalous Body Experience’, ‘Emotional Numbing’, ‘Anomalous Subjective Recall’, and ‘Alienation from Surroundings’. The questionnaire addresses the complexity of depersonalization and uncovers symptoms that can be mapped onto distinct psychopathological domains. &#13;
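A minimal sketch of the CDS global-scoring rule, assuming each item contributes its frequency rating plus its duration rating (with duration taken as 0 when the symptom was never experienced); the names are my own:

```python
# Illustrative sketch of CDS global scoring as described above.
# Assumes each item contributes frequency (0-4) plus duration (1-6,
# or 0 when the symptom was never experienced); names are hypothetical.

def cds_global_score(responses):
    """Global CDS score: the sum of frequency and duration ratings
    across all 29 items, giving a range of 0-290."""
    return sum(frequency + duration for frequency, duration in responses)

# A participant reporting the maximum on every item scores 4 + 6 = 10
# per item, which gives the scale maximum of 29 * 10 = 290.
```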
Cortical Hyper excitability Index&#13;
The CHi was designed to provide an index of the visual irritability, discomfort and associated visual distortions that individuals experience (Braithwaite, Merchant, Dewe and Takahashi, 2015). These experiences are closely linked to increased cortical hyper-excitability. A major advantage of the CHi’s design is that it captures three broad factors: (1) heightened visual sensitivity and discomfort, (2) negative aura-type visual aberrations, and (3) positive aura-type visual aberrations. The items in the questionnaire cover a wide range of visual experiences (e.g. sensitivity to external sensory information such as lights and patterns; discomfort in certain environments; dizziness/nausea; discomfort or irritation when reading a certain font or style of writing) that have previously been reported in hallucination-based experimental studies of patients, control groups and non-clinical populations, as well as in work on aura and its underlying dimensions. The CHi uses fine-grained 7-point Likert response scales: each question has two response scales, frequency (1-7, where 1=not at all frequent and 7=very frequent) and intensity (1-7, where 1=not at all intense and 7=extremely intense). In terms of scoring, both scales are summed to provide an overall CHi index. However, a value of 1 is first subtracted from each frequency and intensity response, transforming each scale from 1-7 to 0-6; without this transformation, an individual who responded with 1 to every question would still have received a score of 54. &#13;
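The scoring transformation above can be sketched as follows; names are my own, and the 27-question count is inferred from the 54 figure (27 questions x 2 scales x 1 point):

```python
# Illustrative sketch of the CHi scoring transformation described above.
# Names are hypothetical; the 54 figure in the text implies 27 questions.

def chi_item_score(frequency, intensity):
    """Score one question: shift each 1-7 response to 0-6 by
    subtracting 1, then sum the two scales."""
    return (frequency - 1) + (intensity - 1)

def chi_index(responses):
    """Overall CHi index: the sum of the per-question scores."""
    return sum(chi_item_score(f, i) for f, i in responses)

# Without the shift, answering 1 on both scales of every question
# would still have accumulated points; after it, such a case scores 0.
```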
Design and Procedure&#13;
All participants were given a brief explanation of the purpose of the study and how they could contribute to it, and those who agreed then scheduled a time for the voluntary study. The experiment was conducted in the Social Hub of the Graduate College, Lancaster University. Participants were seated comfortably to the right of the researcher. They were asked to read the Participant Information Sheet carefully and, if they agreed, to sign the consent form. It was made clear to participants that the confidentiality of their personal information would be ensured and that they could, at any point, (1) ask questions during the experiment, (2) stop the experiment if they felt uncomfortable, and (3) withdraw from the study with no adverse consequences, provided they informed the researcher by email. Participants were again asked whether they suffered from any neurological disorder, especially migraine, migraine with aura, or photosensitive epilepsy, and whether they had any severe incidences of alcohol or drug abuse. &#13;
The first phase of the experiment was the pattern glare task. Individuals were handed the first pattern, at the low frequency (LF), and were asked to stare at the white dot in the center of the pattern for 10-15 seconds. They were then asked to rate the questions based on their experience on a scale of 0-7 (0=minimum, 7=maximum). The questions asked whether they experienced any blurring of lines, bending of lines, shimmering or flickering, fading, or any shadowy shapes. Before the second pattern was handed over, it was confirmed that the participant was comfortable proceeding with the experiment and was not experiencing any kind of visual stress. The same steps were repeated for the other two patterns, at the medium frequency (MF) and high frequency (HF). &#13;
The order in which the patterns were viewed was randomized for each participant. Individuals who are prone to pattern glare can be identified from the sum of distortions at 3 cpd (MF), or from the difference between the 3 cpd and 12 cpd scores, also called the '3-12 cpd difference'. After a two-minute break, in the second phase of the experiment participants answered the 29 questions of the Cambridge Depersonalisation Scale, which concern the frequency and duration of any 'strange or funny experiences' they had felt over the past six months. Lastly, in the third phase, the second questionnaire, the Cortical Hyper-Excitability Index, was introduced. As with the patterns, the order in which the questionnaires were presented was also randomised, in order to obtain variety in participants' responses. The total time taken to conduct the experiment was about 20 minutes or less. Afterwards, the individuals were thanked for their time and effort. &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1780">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1781">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1782">
                <text>Bakshi2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1783">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1784">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1785">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1786">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1787">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1788">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1789">
                <text>Dr Jason J Braithwaite</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1790">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1791">
                <text>Clinical Psychology&#13;
Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1792">
                <text>48 Participants (30 males and 18 females)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1793">
                <text>Correlation&#13;
Multiple Regression&#13;
ANOVA&#13;
Exploratory Factor Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="77" public="1" featured="0">
    <fileContainer>
      <file fileId="32">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e46e8d20a4047d694e440d515b4cd3c7.pdf</src>
        <authentication>c117c44603181de41daef23e2c8092e5</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1794">
                <text>Infant Gesture and Parent Knowledge of Development</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1795">
                <text>Miranda Sidman </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1796">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1797">
<text>Background: Before children can communicate verbally, they use gesture to tell us what they want. Our understanding of the importance of gesture in language development has expanded greatly over the past few decades, and the methods used to measure gesture and language development have also progressed. Gesture and language assessment rely heavily on parent reports. It has been suggested that what parents know about development also has consequences for their child’s developmental outcomes. &#13;
&#13;
Aims: To validate the gesture section of the UK-CDI Words and Gestures (Alcock, Meints, &amp; Rowland, 2017), and to explore parent knowledge of language and gesture milestones. &#13;
&#13;
Methods &amp; Procedure: Twenty-seven children and their parents participated in the first experiment: the parents completed the UK-CDI W&amp;G and the children took part in an in-person gesture validation task. Thirty parents with a child aged 8-18 months participated in the second experiment; they completed the UK-CDI W&amp;G as well as our new parent knowledge questionnaire. &#13;
&#13;
Results: In Experiment One, children’s scores from the gesture task correlated significantly with parent-reported scores on the UK-CDI W&amp;G. In Experiment Two, parents were more accurate at ordering and estimating the age of language milestones than gesture milestones. &#13;
&#13;
Conclusions: The findings of Experiment One provide further support and confidence for the UK-CDI W&amp;G as a language assessment tool, giving researchers and clinicians a standardised tool and method for assessing language norms and delays. The findings of Experiment Two indicate that parents are not especially knowledgeable about the developmental domain of gesture, showing where parents need to be educated to benefit the developmental outcomes of their children. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1798">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1799">
                <text>Experiment 1  &#13;
&#13;
Method  &#13;
&#13;
In this experiment we attempted to validate the gesture section of the UK-CDI Words and Gestures questionnaire through responses to the questionnaire and with an in-person gesture task procedure.  &#13;
&#13;
Participants  &#13;
&#13;
Twenty-seven children and their parents participated in this study. Participants included 10 girls and 17 boys aged between eight and eighteen months (M = 12.5 months, SD = 2.3 months), recruited from the Lancaster University Babylab and through social media (e.g. Facebook). The parents who participated were 26 mothers and one father. To be eligible, all participants had to be native British English speakers. All participants were self-selected and received a children’s book as payment for participation.  &#13;
&#13;
Apparatus and Materials  &#13;
&#13;
UK-CDI Words and Gestures  &#13;
&#13;
The UK-CDI Words and Gestures (Alcock et al. 2013) is a parent-report questionnaire used to assess the language development of children aged eight to 18 months. The questionnaire offers a checklist of 395 words from several different categories (e.g., animals, toys, household items). Parents are asked to indicate whether their child can say and understand, just understand, or does not know each word. The child obtains a total comprehension score (the sum of the words they understand) and a total production score (the sum of the words they say and understand). There is also a gesture section consisting of 57 gestures, divided into subsections (e.g., first communicative gestures, games, actions, pretending to be a parent, and imitating other adult actions). In the First Communicative Gestures section, parents indicate whether their child does a gesture often (two points), sometimes (one point), or not yet (zero points). For the remaining sections, parents tick yes or no to indicate whether their child does a gesture. A total gesture score is calculated as 0.5 times the First Communicative Gestures section score plus the total number of Yes responses from the remaining sections. See Appendix D for the full UK-CDI W&amp;G questionnaire.  &#13;
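The total gesture score formula can be sketched as follows, assuming responses have already been coded numerically; the names are my own, not part of the published questionnaire:

```python
# Illustrative sketch of the UK-CDI Words and Gestures total gesture
# score as described above; names are hypothetical.

def total_gesture_score(first_communicative_codes, other_yes_counts):
    """0.5 times the First Communicative Gestures section score
    (per-gesture codes: 2=often, 1=sometimes, 0=not yet), plus the
    number of Yes responses across the remaining sections."""
    return 0.5 * sum(first_communicative_codes) + sum(other_yes_counts)
```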
&#13;
Gesture Task  &#13;
&#13;
The gesture task used in this study was constructed by Alcock et al. (2013) to establish the content validity of the gesture scale of the UK-CDI W&amp;G. It consists of 10 gesture items taken from the gesture section of the UK-CDI Words and Gestures, ranging from low-frequency items (e.g., ‘can you give me a high five?’) through medium-frequency items (e.g., ‘can you put on a hat?’) to high-frequency items (e.g., ‘can you feed the teddy/dolly?’). The stimuli were the nine children’s toys required for the items on the gesture task. See Appendix B.  &#13;
&#13;
Procedure  &#13;
&#13;
Participants were asked to complete the UK-CDI Words and Gestures (Alcock et al. 2016), sent via an electronic link, prior to the home visit. Upon completion of the UK-CDI, a home visit was scheduled and took place in each participant’s home. The task was administered by the researcher in a quiet room with the child and parent. Before the gesture task was administered, parents were briefed on the procedure and told not to repeat instructions during the gesture task until cued by the researcher. Participants were asked each item first without any demonstration or cueing. If there was no response, the researcher would demonstrate the gesture and say, ‘Can you show me the (x)?’. If there was still no response, the parent was asked to demonstrate the gesture. (See Appendices B and C for the gesture task procedure and list of stimuli.) Each participant was recorded for approximately 45 minutes.  &#13;
&#13;
Scoring  &#13;
&#13;
For the gesture task, participants were scored for 30 minutes. Any time the participant was out of the camera’s view or was not cooperating was excluded from the video analysis. For each item on the gesture task, participants scored two points for completing a gesture on their own, one point for completing a gesture after a demonstration, or zero points for not completing the gesture. Participants were also observed and scored for any spontaneous gestures exhibited during the scored time. Spontaneous gestures included any gestures that appear on the UK-CDI W&amp;G questionnaire but were not part of the gesture task. Each spontaneous gesture observed during the home visit was scored one if the participant produced it and zero if they did not.  &#13;
&#13;
Inter-rater Reliability  &#13;
&#13;
Each video was scored twice by the researcher and a third time by another master’s student at Lancaster University. The second scorer was briefed on the nature of the videos, the UK-CDI W&amp;G questionnaire, and the gesture task, and was familiar with the content of the study. The agreement level was calculated as: percent agreement = (agreements / (agreements + disagreements)) × 100. The two scorers reached an agreement level of 94%.  &#13;
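As a minimal sketch, the percent-agreement formula quoted above can be expressed directly; the function name is mine, the formula is from the text.

```python
# Percent agreement = (agreements / (agreements + disagreements)) x 100,
# as quoted above; the function name is illustrative.

def percent_agreement(agreements, disagreements):
    return agreements / (agreements + disagreements) * 100

percent_agreement(94, 6)  # 94.0, consistent with the 94% reported
```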
&#13;
Experiment 2 Methods &#13;
&#13;
This experiment investigated what parents know about language and gesture development using two online questionnaires. &#13;
&#13;
Participants &#13;
&#13;
Thirty parents with a child between the ages of eight and 18 months participated in this study; all were mothers. Participants were recruited through the Lancaster University Babylab and through social media advertisements for the study. To be eligible, participants had to be native British English speakers. All participants who completed the study were entered into a draw to win a £20 Amazon gift voucher. &#13;
&#13;
Apparatus and Materials &#13;
&#13;
UK-CDI Words and Gestures &#13;
&#13;
The same version of the UK-CDI Words and Gestures (Alcock et al. 2016) was used in the second experiment. &#13;
&#13;
Parent Knowledge Questionnaire &#13;
&#13;
The researcher constructed a questionnaire to investigate what parents know about language and gesture development. The format of the questionnaire was based on a previous study investigating what mothers know about play and language development (Tamis-LeMonda et al. 1998). The questionnaire consisted of 11 language items and 11 gesture items. The researcher used a paired-comparisons procedure to match each item in the respective domain (language or gesture) with every remaining item, resulting in 55 pairs for language and 55 pairs for gesture. All pairs were randomized and presented in a left-right alignment. Participants were asked to select the item in each pair they believed to be more difficult and to occur at a later age. Following the paired-comparisons task, the same 11 language and 11 gesture items were presented in a randomized age checklist, and participants were asked to estimate the age at which each milestone emerges. See Appendix E for the full questionnaire. &#13;
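Why 11 items yield 55 pairs: each item is compared with each of the other 10, and each unordered pair is counted once, giving 11 × 10 / 2 = 55. A sketch of the pair construction, assuming hypothetical item labels rather than the actual questionnaire items:

```python
# Build the 55 paired comparisons for one domain (language or gesture).
# Item labels are placeholders, not the actual questionnaire items.
from itertools import combinations
import random

items = [f"item_{i}" for i in range(1, 12)]   # 11 hypothetical milestones
pairs = list(combinations(items, 2))          # every unordered pair once
assert len(pairs) == 55                       # 11 * 10 / 2

# Randomise pair order and left-right placement, as described above.
random.shuffle(pairs)
pairs = [p if random.random() < 0.5 else (p[1], p[0]) for p in pairs]
```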
&#13;
Language and Gesture Scales &#13;
&#13;
The language and gesture items were chosen based on empirical findings about language and gesture development in the literature and the previous work of Tamis-LeMonda et al. (1998). The language items gradually increased in sophistication from level one to level 11. Levels one through four represented prelinguistic communication, from nondiscriminant cooing to requesting a target object. Levels five through seven represented single-word utterances, from imitation to expressing possession. Levels eight to 11 represented multi-word utterances, from expressing concrete desires to expressing memories and emotions. &#13;
&#13;
The gesture items were taken from the UK-CDI W&amp;G (Alcock et al. 2016) gesture section. Items were selected to ensure the full age range of eight to 18 months was represented. &#13;
&#13;
Procedure &#13;
&#13;
Participants were sent two links, one for the UK-CDI W&amp;G questionnaire and one for the Parent Knowledge Questionnaire. For the UK-CDI W&amp;G, participants were instructed to indicate whether their child could understand and say, just understand, or not understand each word, and to indicate whether or not their child could complete each gesture. Upon completion of the UK-CDI W&amp;G, participants were instructed to complete the Parent Knowledge Questionnaire. &#13;
&#13;
The first task on the Parent Knowledge Questionnaire included 11 language and 11 gesture items, which rendered 55 paired comparisons in each domain. Participants were asked to select the item in each pair they believed to be more difficult, that is, to occur later in development. Following the paired-comparisons task, participants were given the 11 language and 11 gesture items individually (and randomized) and were asked to estimate the age at which they believe each milestone first occurs. From these procedures, the researcher calculated the parents’ accuracy at judging the difficulty of language and gesture items by correlating their ordering of items with the empirical scales using Spearman’s rho. Four accuracy scores were calculated for each participant: two from the paired-comparisons tasks (language and gesture separately) and two from the language and gesture age checklists. The researcher also calculated two discrepancy scores for each participant, one for language and one for gesture. Each score estimated how discrepant parents’ judgements of age onsets were; these values were computed by summing the absolute differences between parents’ age estimates and the empirical ages of onset as stated in the literature. &#13;
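The accuracy and discrepancy measures described above can be sketched as follows. Spearman's rho is computed here with the standard no-ties formula rho = 1 − 6Σd²/(n(n² − 1)); the variable names and example values are illustrative, not the study's data.

```python
# Sketch of the accuracy (rank correlation) and discrepancy (summed
# absolute age differences) measures described above.

def spearman_rho(ranks_a, ranks_b):
    """No-ties Spearman rank correlation between two rank orderings."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

def discrepancy_score(parent_estimates, empirical_ages):
    """Sum of absolute differences between a parent's age estimates
    (in months) and the empirical ages of onset."""
    return sum(abs(p - e) for p, e in zip(parent_estimates, empirical_ages))

# A parent whose ordering matches the empirical scale exactly:
spearman_rho([1, 2, 3, 4], [1, 2, 3, 4])      # 1.0
# Illustrative age estimates vs. empirical onsets (months):
discrepancy_score([9, 12, 15], [10, 12, 18])  # |9-10| + 0 + |15-18| = 4
```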
&#13;
Ethics &#13;
&#13;
After reading information about the study, parents ticked a box to give their consent to participate. Ethical approval for the study was obtained from the Lancaster University Research Ethics Committee. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1800">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1801">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1802">
                <text>Sidman2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1803">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1804">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1805">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1806">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1807">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1808">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1809">
                <text>Dr. Katie Alcock</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1810">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1811">
                <text>Clinical, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1812">
                <text>Twenty-seven children</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1813">
                <text>Correlation, psychometrics, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="78" public="1" featured="0">
    <fileContainer>
      <file fileId="33">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d6a054cf59a1bcc256a999da72fc52d5.pdf</src>
        <authentication>3aee7166ee8d4897862d4104e49ff70c</authentication>
      </file>
      <file fileId="34">
        <src>https://www.johnntowse.com/LUSTRE/files/original/66186b47da176ed6756ec7ba414f2cef.pdf</src>
        <authentication>1f20290d62202afd81e98b9478272af1</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1814">
                <text>The Development of an Attentional Bias toward Body Size Stimuli: Performance on a &#13;
Novel Stroop Task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1815">
                <text>Raegan Bridget Cecilia Whitehead</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1816">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1817">
<text>Distorted perceptions of body size have been identified and well documented in eating-disordered (ED) and eating-restricted populations; however, less is known about the development of this distortion. Research has employed Stroop food- and body-word tasks to investigate attentional biases towards semantically related words and found a significant Stroop effect to such stimuli in ED and sub-clinical cohorts. The Size Congruity Effect (SiCE) has confirmed the perception of inanimate object size, but such an effect has not yet been studied with regard to body size specifically. This study employed a novel Stroop size task to measure the perception of conceptual body size versus physical object size in four developmental age groups (Child, Adolescent, Young Adult and Adult). The Body Shape Questionnaire (BSQ-34) was also administered as a measure of body dissatisfaction in participants over the age of 18. Findings indicate that a significant attentional bias towards body size is present across all age groups, but is most prevalent in adolescent and young adult participants. These findings imply that cognitive interference towards body size stimuli is not only present in the typical population, but is also present in children from age 7. Body dissatisfaction, measured using the BSQ-34, did not have a significant effect on Stroop interference scores, suggesting that dissatisfaction with one’s own body does not affect perception of others’ body size. The findings contribute to the field’s understanding of body size misperception throughout typical development; the results also suggest that body size perception is special, and not processed in the same way as inanimate size.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1818">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1819">
                <text>Participants&#13;
Eighty-eight participants (N = 88) were recruited to participate in this research. The participants (35 males, 53 females) were aged between 7 and 59 years (Mage = 23.38, SDage = 14.34). Participants were divided into one of four groups, dependent on their chronological age.&#13;
Child group. Child participants (N = 24, 8 male and 16 female), aged between 7 and 11 years (Mage = 10.04, SDage = 1.23), were recruited from St Boniface RC Primary School, Salford. A minimum participation age of 7 years was enforced for this experiment as previous research has not identified a consistent Stroop effect with younger children (Comalli et al., 1962). Parental consent was obtained prior to the research and participant assent was obtained on the day of testing. Five participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Five participants reported having a specific learning difficulty (SLD); three participants had dyslexia, one participant had dyspraxia, one participant had attention deficit disorder (ADD) and one participant had attention deficit hyperactivity disorder (ADHD). One participant was on the autism spectrum (ASD). All participants with additional needs were performing well in mainstream school and were therefore considered able to participate in this research. Twelve participants had white British or white Irish ethnicity. Three participants had white European ethnicity. Three participants had black African ethnicity. Three participants had mixed or multiple ethnicities. One participant had Chinese Asian ethnicity. One participant had Irish Traveller ethnicity. Two participants spoke English as a second language; however, both were fluent English speakers. Each child received a reward sticker for their participation.&#13;
Adolescent group. Adolescent participants (N = 22, 9 males, 13 females), aged between 13 and 16 years (Mage = 14.73, SDage = 1.12), were recruited through opportunity sampling. Social media posts were used to advertise the study, as well as word of mouth. All participants were recruited from Greater Manchester. Parental consent was obtained prior to testing and participant assent was obtained on the day of testing. One participant was colour blind. Six participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Four participants reported having a SLD; two participants had dyslexia, one participant had dyslexia and dyscalculia and one participant had dyslexia and ADHD. All participants with SLDs were performing well in mainstream school and were therefore considered able to participate in the research. Eighteen participants had white British or white Irish ethnicity. Two participants had black African ethnicity. One participant had mixed or multiple ethnicities. One participant had British and Chinese ethnicity. &#13;
Young Adult group. Young Adult participants (N = 22, 7 male and 15 female), aged between 22 and 33 years (Mage = 25.86, SDage = 2.34), were recruited through opportunity sampling. The researcher utilised social media, and approached classmates in Lancaster University’s Psychology Department and workplace colleagues to participate in the research. All participants were recruited from the North West of England. Each participant provided their informed consent prior to the research. Six participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Two participants reported having a SLD; one participant had dyslexia and one participant had ADD. Both participants reported after testing that they were able to complete the task with no additional difficulty as a result of their SLD. Fifteen participants had white British ethnicity. Five participants had white European ethnicity. One participant had white American ethnicity. One participant had mixed or multiple ethnicities. Three participants spoke English as a second language; however, all were fluent English speakers.&#13;
Adult group. Adult participants (N = 20, 11 male and 9 female), aged between 37 and 59 years (Mage = 45.75, SDage = 8.27), were recruited through opportunity sampling. Social media posts were used to advertise the study, as well as word of mouth. All participants were recruited from Greater Manchester. Each participant provided their informed consent prior to the research. Ten participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. One participant had dyslexia. This participant reported after testing that they were able to complete the task with no additional difficulty as a result of their SLD. Fourteen participants had white British ethnicity. Three participants had white European ethnicity. Two participants had black Caribbean ethnicity. One participant had mixed or multiple ethnicities.&#13;
Three participants were removed from the data sample due to a high number of errors. The responses from 85 participants were subsequently included in the data analyses. &#13;
&#13;
This study received ethical approval from Lancaster University’s ethics committee on 1st May 2018.&#13;
&#13;
Materials&#13;
Task. The novel Stroop task was created using PsychoPy, an open-source Python-based program used to run psychological experiments (Peirce, 2007, 2009). In the task, participants were presented with computer-generated images of female bodies. Each body was presented individually on the screen and remained there until the participant made their screen-size selection. One hundred and eight images were presented in total: 54 in the congruent trial and the same 54 in the incongruent trial. Eighteen unique images were presented three times; each time, the screen size of the image was varied to ensure all 18 images were presented in all three screen sizes. The 18 images consisted of three model types (see Figure 1), which were used to represent the polarities of body size (3 small body sizes, 3 large body sizes; see Figure 2). &#13;
Figure 2. An image to show the body ‘models’ used in the experiment. Row 1 Left – Right: Model 1, Model 2, Model 3, Model 4. Row 2 Left – Right: Model 5, Model 6, Model 7, Model 8.&#13;
&#13;
The first testing phase of the Stroop task consisted of the individual presentation of 54 stimuli. These stimuli were presented with congruent screen and body sizes: all stimuli presented at a small screen size (10 x 4cm, 11 x 4.4cm, 12 x 4.8cm) contained a small body size, and all stimuli presented at a large screen size (21 x 8.4cm, 22 x 8.8cm, 23 x 9.2cm) contained a large body size. The second testing phase of the Stroop task consisted of the individual presentation of the same 54 stimuli as the first phase. These stimuli were presented with incongruent screen and body sizes: all stimuli presented at a small screen size contained a large body size, and all stimuli presented at a large screen size contained a small body size. See Appendix A for screenshots of the Stroop task, demonstrating the congruent and incongruent presentation of the stimuli as described here. The order of stimulus presentation was pseudo-randomised within PsychoPy, so that each individual image was presented only once per participant. Randomising the order of stimulus presentation ensured that participants were not subject to order effects (Shaughnessy, Zechmeister, &amp; Zechmeister, 2006).&#13;
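The trial structure described above (18 unique images, each shown at all three screen sizes of its condition, giving 54 stimuli per phase) can be sketched as follows. The image identifiers are placeholders and the helper function is hypothetical; only the screen sizes in centimetres are taken from the text.

```python
# Hypothetical sketch of the congruent/incongruent trial lists described
# above. Image identifiers are placeholders; sizes (cm) are from the text.
import random

SMALL_SIZES = [(10, 4.0), (11, 4.4), (12, 4.8)]
BIG_SIZES = [(21, 8.4), (22, 8.8), (23, 9.2)]

def build_block(images, sizes):
    """Pair every image with every screen size, then shuffle, so each
    image appears exactly once at each size."""
    trials = [(image, size) for image in images for size in sizes]
    random.shuffle(trials)
    return trials

small_bodies = [f"small_body_{i}" for i in range(1, 10)]  # 9 placeholders
large_bodies = [f"large_body_{i}" for i in range(1, 10)]

# Congruent phase: small bodies at small sizes, large bodies at large sizes.
congruent = build_block(small_bodies, SMALL_SIZES) + build_block(large_bodies, BIG_SIZES)
# Incongruent phase: the pairings are swapped.
incongruent = build_block(small_bodies, BIG_SIZES) + build_block(large_bodies, SMALL_SIZES)
assert len(congruent) == len(incongruent) == 54
```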
Participants were instructed to determine the screen size of each stimulus and respond as quickly and accurately as possible using the keyboard keys indicated to them in the instruction phase. The relevant keyboard keys (A and L) were indicated with white stickers on the external keyboard. Key allocations (e.g. A = Small, L = Big) were also visible on screen throughout the task; see Appendix A for screenshots of the task. Participant response times were recorded within PsychoPy and exported to Microsoft Excel. The task was presented on a Toshiba Satellite Pro laptop computer with a 15.6-inch HD non-reflective display with a 16:9 ratio and LED backlighting. &#13;
&#13;
Body stimuli. Eighteen images of computer-generated semi-nude female bodies, ranging in body size and physical appearance, were used in the current study. These were created and donated to the researcher by Dr Martin Tovee, a body size perception researcher, for the purpose of the current experiment. The bodies ranged in size from ‘emaciated’ to ‘overweight’, and the variations in body size were visually distinguishable (see Figure 1). Eight ‘models’ were created, each with variations in physical appearance including hair colour and style, skin tone, facial features and eye colour (see Figure 2). All bodies were presented in a forward-facing 0° pose, in order to eliminate visual preference or difficulties in comparing stimuli. Image size was manipulated as a factor of the experiment: to reflect ‘small’ screen size, all images were presented at 10 x 4cm, 11 x 4.4cm and 12 x 4.8cm; to reflect ‘big’ screen size, all images were presented at 21 x 8.4cm, 22 x 8.8cm and 23 x 9.2cm. These sizes were chosen as they created incremental differences in screen size that were visually distinguishable, as can be seen in Appendix B.&#13;
Figure 2. An image to show the body size increments in the stimuli provided by Dr. Tovee. Model 3 is used to illustrate the size increments. Row 1 Left – Right: Size 1, Size 2, Size 3, Size 4, Size 5. Row 2 Left – Right: Size 6, Size 7, Size 8, Size 9, Size 10. For the purpose of the current experiment, sizes 1, 3, 4, 7, 8 and 10 were used as body size stimuli as these bodies had the largest size variation when visually scrutinised.&#13;
&#13;
Questionnaires. All participants were required to complete a demographic questionnaire; see Appendix C. The parent/guardian of a participant under the age of 16 was required to complete this questionnaire on behalf of the participant. This questionnaire was used to ascertain factors which may affect a participant’s ability to successfully complete the Stroop task.&#13;
Participants over the age of 18 years were also required to complete the Body Shape Questionnaire (BSQ-34; Cooper, Taylor, Cooper &amp; Fairburn, 1987). The BSQ-34 is a 34-item scale which measures participants’ feelings toward their own weight and body shape (Cooper et al., 1987). For example: ‘Have you been afraid that you might become fat (or fatter)?’ and ‘Has seeing your reflection (e.g. in a mirror or shop window) made you feel bad about your shape?’. Each item of the scale is scored on a six-point Likert scale, ranging from 1 (never) to 6 (always). BSQ-34 scores are totalled using Likert scale points: a score of less than 80 indicates no concern with shape, a score between 80 and 110 indicates a mild concern with shape, a score between 111 and 140 indicates moderate concern with shape, and a score of 140 and above indicates a marked concern with shape (Cooper et al., 1987). The BSQ-34 was originally intended for use with female participants; the authors have since approved changes to items 9, 12 and 25 for use with male participants, and this version was provided for male participants in the current experiment. The BSQ-34 was not considered suitable for participants under the age of 18 due to the explicit mention of clinically salient stimuli. &#13;
The BSQ-34, as well as participants’ consent forms and demographic questionnaires, were provided to participants on Adobe Fill &amp; Sign using an Apple iPad and touchscreen pen. All participants indicated daily or weekly use of a touchscreen and/or computer.&#13;
&#13;
Design&#13;
Variables. The dependent variable in this study was task response time, recorded by PsychoPy in milliseconds. Mean response times (MeanRT) were calculated for the congruent and incongruent trials, per participant. An interference score (incongruent MeanRT minus congruent MeanRT) was also calculated for each participant. The dependent variables of MeanRT and Interference Score were both used in the current data analyses. The independent variables in the study were AgeGroup and Congruency.&#13;
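The interference score defined above (incongruent MeanRT minus congruent MeanRT) amounts to the following calculation; the response-time values here are made up for illustration, not taken from the study's data.

```python
# Sketch of the dependent variables described above: per-condition mean
# response time and the interference score. RTs in ms; values invented.
from statistics import mean

congruent_rts = [612, 598, 640]      # hypothetical per-trial RTs (ms)
incongruent_rts = [655, 671, 702]

mean_congruent = mean(congruent_rts)
mean_incongruent = mean(incongruent_rts)
# A positive score indicates slower responses on incongruent trials,
# i.e. a Stroop interference cost.
interference = mean_incongruent - mean_congruent
```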
AgeGroup. This was a between-subjects factor. Participants were placed into one of four age groups, based solely on their chronological age.&#13;
Congruency. This was a within-subjects factor. All eighty-eight participants completed the same novel Stroop task, containing both congruent and incongruent trials. The order of trial presentation was randomised for each participant.&#13;
&#13;
Procedure&#13;
Three months prior to testing, the parents/guardians of children in years five and six of St Boniface RC Primary School, Salford, were contacted and given the opportunity for their child to participate in this study. The children of parents/guardians who returned the consent form and completed questionnaire were able to participate. The research was also advertised, via social media and word of mouth, to potential participants. The parents/guardians of participants under the age of 16 years, and participants over 16 years, were provided with an information letter, consent form and demographic questionnaire (see Appendices C, D and E). Those who responded with a complete consent form and questionnaire participated in the research.  &#13;
Participants were individually invited to complete the procedure in a small quiet room. All participants were seated at a desk in front of the testing laptop and an external keyboard; see Figure 3 for the testing set-up. Participant consent, and child assent, were obtained once the participants were seated. All participants were encouraged to ask any questions they had, and child participants were reminded that they could return to their class at any time, without providing a reason. Once the preliminary period was completed, participants were asked to complete the computerised Stroop task. &#13;
The task was visible on the screen prior to each participant entering the room. Participants were aided through the initial instruction screens of the task and encouraged to stop and ask questions at this stage. The researcher read all instructions to participants under the age of 16, and to any participant who requested that the instructions be read to them. The task then contained two practice trials, in order to ensure that participants understood their role in the task. All participants were able to complete the two practice trials without difficulty and were therefore permitted to complete the rest of the task. The researcher left the room and waited nearby for all participants over the age of 16 years; the researcher remained in the testing room for younger participants. Participants were informed that they should only take a break when they reached an instruction screen, as their times were being recorded on all testing screens.&#13;
Participants were asked to alert the researcher once they had completed all stages of the computer task and reached the end screen. Child participants were given an envelope containing a parental debrief and escorted back to their classroom. Young adult and adult participants were asked to complete the BSQ-34 (Cooper et al., 1987) using Adobe Fill &amp; Sign on an Apple iPad. The BSQ-34 was provided after the task as Davison and Wright (2002) reported that this method reduced demand characteristics in a similar study. Upon completion of the testing period, all participants were thanked for their time and provided with a debrief sheet as well as help and information pertaining to eating disorder or body anxiety concerns. Child participants were rewarded with a sticker for completing the task. Please see Appendix F for the participant debrief.&#13;
Each participant's response times were recorded in an Excel document, which was then encrypted and saved to the researcher's password-protected laptop. All data were also stored on an encrypted external hard drive; this copy of the data will be securely destroyed upon completion of the data analyses. &#13;
Figure 3. A photograph showing the testing set-up used in the current study. Note that participants were encouraged to adjust their seat height so as to remain at a ninety-degree angle to the screen. The testing set-up was replicated for all eighty-eight participants to ensure consistency.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1820">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1821">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1822">
                <text>Whitehead2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1823">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1824">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1825">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1826">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1827">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1828">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1829">
                <text>Dr Michelle To</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1830">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1831">
                <text>Cognitive, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1832">
                <text>Eighty-eight participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1833">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="79" public="1" featured="0">
    <fileContainer>
      <file fileId="35">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8f2f87e573c831b72cee2c8b8ba543dc.pdf</src>
        <authentication>f34ccfe7021afea913451a930716e424</authentication>
      </file>
      <file fileId="36">
        <src>https://www.johnntowse.com/LUSTRE/files/original/9fc3d7b08fbf5aba53f3d3f32bc10296.pdf</src>
        <authentication>1697756e4beef9e38469b4104adb6c7b</authentication>
      </file>
      <file fileId="37">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6e8fe4b7fd6c4b29c575e3b1249198eb.pdf</src>
        <authentication>f1ee4628271e3179323d196a01d03e3c</authentication>
      </file>
      <file fileId="77">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4c4162827312b2c2d00e7c64b9587ebd.csv</src>
        <authentication>ed6519051947a6e4b43340598a2c7bf9</authentication>
      </file>
      <file fileId="78">
        <src>https://www.johnntowse.com/LUSTRE/files/original/fe12af25b11cfb5f017a248c53c613e3.csv</src>
        <authentication>aadf65e48136716fdfc5f72bb3921dbe</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1834">
                <text>Investigating the Effects of Challenging Behaviour on the Sibling Relationship: Influenced by Behaviour Topography and Shaped by Attributions of Controllability?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1835">
                <text>Lauren Laverick-Brown</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1836">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1837">
                <text>Challenging behaviour (CB) displayed by individuals with an intellectual disability (ID) is consistently identified as a stressor on the relationship that they have with their typically developing (TD) sibling. Given the potentially damaging effects of CB on the quality of the sibling relationship and the wellbeing of the TD sibling, understanding the cognitions that underpin TD siblings’ emotional and behavioural responses to CB is essential to direct sibling-targeted psychoeducational interventions. This study considered whether siblings’ responses to CB vary according to behavioural topography. Further, the study considered whether any effects detected were shaped by attributions made by TD individuals regarding the controllability of their siblings’ CB. Thirty-eight siblings of individuals with ID, and 36 participants with a nondisabled sibling, completed a web-based questionnaire measuring participants’ positive and negative affect towards their sibling, the nature of their sibling’s CB, and controllability perceptions regarding their sibling’s CB. The results of this study reiterate that CB is a stressor on the sibling relationship, with externally directed CB (i.e. aggression, destruction) eliciting greater negative affect in siblings compared to internally directed behaviours (i.e. self-injury). However, it could not be concluded with an appropriate level of significance (i.e. p&lt;.05) that this was due to participants perceiving their siblings as more in control of their externally directed behaviours. These findings may have resulted from the diverse nature of the participant group. Further research is required to examine specific differences in the emotional impact of each type of challenging behaviour (and then subsequently, whether any differences detected arise due to contrasting perceptions of behaviour controllability).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1838">
                <text>Participants&#13;
Seventy-four TD individuals who had a sibling completed this study. Participants were allocated to one of two conditions according to whether or not they had a sibling with an ID (i.e. whether their sibling was TD). There were 38 participants who had a sibling with ID (82% female, Mage = 27.32, SD = 9.65) and 36 participants with a TD sibling (92% female, Mage = 28.61, SD = 10.81); participants ranged from 13 to 60 years old. Siblings' diagnoses are reported in Table 1. &#13;
Table 1: Diagnoses of participants' siblings&#13;
&#13;
&#13;
Participants were recruited on a voluntary basis through social media advertisements posted by the researcher and by disability organisations (who also sent emails to their followers), and through word of mouth. The researcher developed a digital research flyer summarising the study's purpose and procedure, which was distributed as described above. To incentivise participation, participants had the option to enter a prize draw for a £20 Amazon voucher upon completion of the study. &#13;
A minimum participation age was determined after inputting the text of each questionnaire included in the study into Coh-Metrix (Version 3.0; Graesser, McNamara, Louwerse &amp; Cai, 2004): a web-based software tool assessing the cohesion and coherence of a text. Coh-Metrix provides an index of readability by generating the reading age of a piece of text, and the reading age determined for the questionnaires was "grade six", indicating that 10/11-year-olds should have the ability to comprehend and respond to the questions. Thus, it was decided that the questionnaires were suitable for TD individuals aged 12 or above.&#13;
Consent was gained from all those over 16 years of age, and parental consent for and assent from those aged 12-15 years of age (see “Ethical Considerations” below for further information).&#13;
Design&#13;
The study was of a correlational design, investigating the relationship between the following continuous variables: the quality of the sibling relationship, CB displayed by the sibling with ID, attributions of controllability made by participants in respect to their sibling’s behaviour, and participants’ general relational abilities. &#13;
As part of further analysis, the intention was to examine whether having a brother/sister with a disability, gender, and birth order (i.e. whether participants were older/younger than their sibling) (all between-subjects factors) had effects on the sibling relationship. &#13;
Materials&#13;
During this study, four self-report questionnaires were administered to all participants: the Positive and Negative Affect Scale (PANAS) (Watson, Clark &amp; Tellegen, 1988) (Appendix A), the Behavioral Problems Inventory (Short Form) (BPI-S) (Rojahn, Matson, Lott, Esbensen, &amp; Smalls, 2001) (Appendix B), the Controllability Beliefs Scale (CBS) (Dagnan, Grant &amp; McDonnell, 2004) (Appendix C), and Social Competence and Close Friendship subscales taken from the Harter Self-Perception Profile for Adolescents (Harter, 2012) (Appendix D). The development and presentation of the questionnaires was done using online Qualtrics software (Qualtrics, Provo, UT). &#13;
The Harter Self-Perception Profile for Adolescents (Harter, 2012) is a multidimensional measure of how young people evaluate their scholastic, social, athletic, and job competencies, as well as physical appearance, romantic appeal, behavioural conduct, and close friendship. However, for the purposes of this study, only the subscales regarding social competence and close friendship were included to detect an individual’s general ability in forming and maintaining relationships with others, which might be a confounding influence on detecting the quality of the sibling relationship. Furthermore, the phrasing of the questionnaire was deemed suitable for both adult and adolescent participants.&#13;
The questions are presented as two clauses (e.g. "Some people know how to make others like them, but…”, and “Some individuals do not know how to make others like them”). Participants are able to select whether each clause is “really true for me” or “less true for me”, though are required to make the one selection out of four options across both clauses that is most self-descriptive. These responses are coded into a 4-point scale, with “1” representing poorer social/friendship abilities, and some items are negatively coded. Sufficient levels of validity and reliability of the Profile have been reported within a range of population groups (e.g. Donnellan, Trzesniewski &amp; Robins, 2015; Rose, Hands, &amp; Larkin, 2012).&#13;
A modified version of the PANAS (Watson et al. 1988) was used to assess participants’ feelings towards their brother/sister with a disability, which were then used to infer the quality of the sibling relationship i.e. greater positive affect would indicate a positive and fulfilling sibling relationship, whilst greater negative affect would indicate poor sibling relationship quality. The PANAS is a self-report questionnaire that consists of two separate scales containing emotion-based items that encapsulate positive and negative affect. Participants were asked to think about their sibling and whether they had felt each emotion towards them, rating this on a 5-point scale to specify how often they feel that emotion, ranging from 1 (very slightly or not at all) to 5 (extremely often). Higher total scores on each scale indicated greater positive/negative affect. “Total negative affect” and “total positive affect” scores were obtained for each participant; whereby higher scores pertain to greater affect.&#13;
The PANAS has been widely utilised to measure variation in affect, and previous research investigating its psychometric properties concludes it to have high reliability and validity across many populations (e.g. Merz, Malcarne, Roesch, Ko, Emerson, et al., 2013; Bakhshipour &amp; Dezhkam, 2006). In this study, certain items of the PANAS were adapted to ensure that they were recognisable to younger participants; for example, "hostile" and "strong" were changed to "angry" and "happy", respectively. The items "jittery", "active" and "determined" were excluded as the researcher did not view them as relevant to the sibling relationship. Nevertheless, statistical analysis revealed that internal consistency remained, with the negative and positive affect scales showing high reliability in the current sample (Cronbach's α = .87 and α = .93, respectively).&#13;
The BPI-S (Rojahn et al., 2001) is a psychometrically sound behaviour rating instrument (Rojahn, Rowe, Sharber, Hastings, Matson, et al., 2012; Mascitelli, Rojahn, Nicolaides, Moore, Hastings, et al., 2015) constituting a series of items referencing examples of CB. When completing the BPI-S, respondents consider whether a specific individual (in this study, the participant's sibling) engages in a behaviour, and then rate its frequency on a 1-to-6-point scale, corresponding to responses ranging from "never" to "daily". The original BPI-S also contains a severity-rating subscale; however, this was excluded from the study, as rating the severity of behaviour was deemed too complicated for younger participants to judge. &#13;
The BPI-S contains questions relating to three types of problem behaviours: self-injurious, stereotypic, and aggressive/destructive behaviours. For the purposes of this study, the behavioural items of the BPI-S were grouped and scored according to whether they constituted internally directed behaviour (IDB; i.e. self-injury) or externally directed behaviours (EDBs; i.e. aggression and destruction). Items referencing stereotyped behaviour were excluded, as it was not possible to neatly categorise them as IDB or EDB. As an addition to the questions of the BPI-S, an opportunity for "free text" was included immediately after, whereby participants could describe any behaviours of concern that were not specified by the questionnaire and rate their frequency. Total scores for the BPI-S were obtained, as well as separate total scores for IDB and EDB frequency, whereby higher scores represent a greater number of incidences of CB.&#13;
Lastly, the CBS (Dagnan et al., 2004) is a 15-item measure designed to capture participants' perceptions regarding an individual's (in this case, their sibling's) control over their CB. Responses are scored on a 1-to-5-point scale, corresponding to 'disagree strongly', 'disagree slightly', 'unsure', 'agree slightly' and 'agree strongly'. Ten items are worded such that agreement reflects participants attributing high control over behaviour (e.g. 'They are trying to wind me up'). In contrast, five items are phrased such that agreement indicates participants attributing low control over behaviour (e.g. 'They don't mean to upset people'); thus, these items are reverse scored. A "total CBS" score was calculated for each participant, with higher scores pertaining to perceptions of greater control over behaviour. Moreover, Dagnan et al. (2004) report good internal reliability, with a Cronbach's alpha of 0.89.&#13;
Demographic information relating to participants’ age, gender, birth order (i.e. were they older/younger than their sibling) and the diagnosis of their disabled sibling (if their brother/sister was disabled) was collected prior to participants completing the questionnaires.  &#13;
Procedure&#13;
After receiving expressions of interest from prospective participants and confirming they had a sibling (with or without an ID), the researcher issued a participant information sheet detailing the nature and aims of the study. Both groups of siblings followed the same study procedure but received participant information sheets that were relevant to their role in the study. The researcher also provided a weblink to the online consent form hosted by Qualtrics. Once participants completed the consent form, they answered demographic questions and generated a participant code to ensure anonymity of responses. Participants were informed prior to the study commencing that they could withdraw at any time, either by closing the webpage or by contacting the researcher and asking to be removed from the dataset.&#13;
Initially, participants responded to items of the Close Friendship and Social Competence subscales of the Harter Self Perception Profile. Following completion of these questions, participants then completed the PANAS, BPI-S and CBS (in that order). Upon finishing the CBS, participants who had a sibling with ID proceeded to a debrief form that outlined the study in detail and provided contact information for support organisations (if needed following discussion of their encounters with CB). Control participants received a debrief form detailing their role in determining the baseline/typical sibling relationship.&#13;
The procedure differed slightly for participants aged under 16 years old. With one exception, who contacted the researcher directly (but ultimately could not participate due to lack of parental consent), this group expressed their desire to participate through their parents contacting the researcher. In response, the researcher sent a consent form for a parent/guardian to complete, giving their permission for their child to participate in the study. Two participant information sheets were also provided; one for parents and another simplified version of the adult participant information sheet for individuals under 16 years old. Once the researcher had received the completed consent form, the weblink to the study was emailed. It was stressed to parents that, though they may wish to support their child in understanding the questions of the study, they should refrain from guiding their child’s answers.&#13;
After clicking the weblink, younger participants completed an assent form and were informed about the participation withdrawal procedures, if required. The presentation of the questionnaires was the same as for those aged 16 years old and above. However, the debrief form was simplified in its language and content to ensure it was accessible to younger participants. Contact information for organisations who could support this group of participants specifically was also provided. Additionally, younger participants with a non-disabled sibling received a simplified version of the adult participant debrief form relevant to their role in the study. After reading the debrief sheet, all participants were given the opportunity to enter into a prize draw for a £20 Amazon voucher. The study lasted roughly 15-20 minutes. &#13;
All participant information sheets, consent forms and debrief sheets are listed in Appendices E – S. &#13;
&#13;
&#13;
Ethical Considerations&#13;
This study was reviewed and approved by the Psychology Department Research Ethics Committee at Lancaster University.&#13;
The topic of this study revolved around participants' experiences of CB, which could involve reflection upon sensitive experiences (including those of violence and destructive behaviour) that elicit negative psychological reactions (such as emotional upset, worry, stress, and shame). Furthermore, the minimum age specified for participants was 12 years old, so some participants recruited would be minors (i.e. a vulnerable participant group).&#13;
In case the discussion of CB experiences elicited negative psychological reactions in participants, contact information for sources of wellbeing support was given as part of the study debrief for both young and adult participants (e.g., talking to a trusted family member or a teacher; information and contact details for free services such as Childline, the Samaritans, The CB Foundation etc.). Offering access to support services was particularly important for younger participants, who may not feel able to speak to their parents about any issues they have.&#13;
Furthermore, consent was required from all participants over the age of 16. If a participant indicated being under the age of 16, consent was sought from a parent/guardian, whilst assent was obtained from all 12-to-15-year-old participants. Consent and assent were monitored throughout the study. All participants were given sufficient opportunity to understand the nature, aims and expected outcomes of research participation. Complex technical information was suitably adapted so that participants aged under 16 years old could give consent to the extent that their capabilities allowed. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1839">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1840">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1841">
                <text>LaverickBrown2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1842">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1843">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1844">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1845">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1846">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1847">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1848">
                <text>Chris Walton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1849">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1850">
                <text>Clinical</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1851">
                <text>Seventy-four typically developing individuals</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1852">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="80" public="1" featured="0">
    <fileContainer>
      <file fileId="38">
        <src>https://www.johnntowse.com/LUSTRE/files/original/cdeccb8763d386dc1f1f9f5c6d7e1f84.pdf</src>
        <authentication>82841499e425774f7414de8d9c851ef6</authentication>
      </file>
    </fileContainer>
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1853">
                <text>An exploration of how young adults engage with charities</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1854">
                <text>Saday Lakhani</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1855">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1856">
                <text>Research exploring how individuals choose to engage with charities has been limited to studies and interpretations from the 20th century. In addition, research into how young adults choose to interact with charities has rarely been explored. The present study aims to tackle both of these issues by exploring how young adults choose to interact with charities. Using Sargeant’s (1990) donor decision model as a base, this investigation explores what motivates and deters potential donors from engaging with charity and how they choose to engage. It was found that income was a major barrier to donation and that the role of others was an important motivator. Lastly, participants noted that social media is a prevalent part of how people choose to interact with charities; however, donation and volunteering are more valued. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1857">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1858">
                <text>Participants &#13;
This investigation consisted of 15 participants based in Lancaster between the ages of 19 and 25 years, all of whom studied at Lancaster University. The sample consisted of eight male participants with an age range of 19-25 and seven female participants with an age range of 19-22. Participants were recruited via opportunistic methods on social media. Advertisements for participation were published on various social media platforms relevant to the University. Each recruited participant was asked to invite a friend to their focus group discussion. Participants were provided with refreshments as an incentive for participation. Due to the method of online recruitment, it was assumed that all of the participants were frequent users of social media and therefore understood its utility. Participants were not filtered for their donation history, as it was assumed that individuals would have donated at some point in the past. &#13;
Procedure &#13;
Each focus group consisted of up to four participants which, as a result of the recruitment method, ensured that each group would consist of two pairs who were not familiar with each other. The intention of this conflicting paired discussion was to encourage &#13;
a more open and honest discussion. As well as this, the design of having a paired discussion ensured that statements made by an individual could be verified or rejected by the paired member, as they were familiar with the activities of the speaker. As such, the paired member could act as a moderator for the contributions. The focus groups were segmented by gender. One group consisted of all male participants, another consisted of all female participants. The remaining two groups were mixed-gender groups. The purpose behind this segmentation was to explore whether there was a difference in responses between male and female participants. &#13;
The focus group discussions took place in a quiet and comfortable room within Lancaster University to encourage a free-flowing discussion without interruption. Upon arrival, each participant was provided with a participant information sheet to read, and a consent form to complete outlining the nature of the study and the confidentiality of the data recorded. After any questions were addressed the discussions began and were audio recorded. &#13;
The topics for discussion centred on the areas of exploration mentioned above. The discussion was structured (see Appendix C for Discussion Guide) but was open allowing the discussion to migrate to a number of areas that were pertinent to the participants. The researcher terminated the discussion upon satisfaction that participants had nothing further to add. Participants were then provided with debrief sheets outlining the purpose of the study and its aims. &#13;
Each focus group discussion was transcribed into a Word document and subsequently added to NVivo 12 for qualitative analysis. &#13;
Analysis &#13;
The transcripts from each group were exported for analysis to NVivo 12 qualitative analysis software (QSR International Pty Ltd. Version 12, 2018). These were then analysed using the framework for thematic analysis derived from Braun and Clarke (2006). Transcripts &#13;
were read multiple times to ensure familiarity with the content of the discussions. Areas of the discussion that were deemed interesting were subsequently coded within the software according to both their semantic and latent qualities. These codes were informed by pre-existing psychological literature in addition to in vivo code generation. This data was then organised into several themes from which conclusions could be generated. These themes were re-analysed to ensure that they were an accurate and valid representation of the content of the discussions. The final themes were then solidified. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1859">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1860">
                <text>Text/Word.docx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1861">
                <text>Lakhani2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1862">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1863">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1864">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1865">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1866">
                <text>Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1867">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1868">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1869">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1870">
                <text>Marketing, Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1871">
                <text>15 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1872">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="81" public="1" featured="0">
    <fileContainer>
      <file fileId="39">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4dd9543e110a7e4ce23d67ad7dc07aff.pdf</src>
        <authentication>40c67288eea36432d7427dbc94d64dac</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1873">
                <text>A Match Made in Heaven? The Effect of Congruency Between Accent and Promoted&#13;
Product in Radio Adverts</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1874">
                <text>Samantha Trow</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1875">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1876">
                <text>Research consistently shows that accents are powerful social cues used in our&#13;
everyday interactions as well as in advertisements; they can change how we perceive&#13;
others and potentially also associated products or brands. Recent studies have&#13;
explored the effect of congruency between the accent of the speaker in adverts and the&#13;
country-of-origin of the advertised products. Yet the findings from research on the&#13;
congruency effect are mixed and sparse. Therefore, this study further investigated&#13;
the effect of congruency. Participants were randomly assigned to one of four&#13;
experimental conditions. The study employed a 2 (Accent: Northern English vs.&#13;
Italian) x 2 (Product: fish and chips vs. pizza) between-participants design. In doing&#13;
this, two adverts had a congruent accent-product pair (e.g., a Northern English speaker&#13;
advertising a fish and chips brand) and two ads were accent-product incongruent (e.g.,&#13;
a Northern English speaker advertising a pizza brand). After listening to the ads,&#13;
participants completed a questionnaire that measured their brand&#13;
memory, attention to the ad, purchase intentions, perceived similarity to the speaker,&#13;
and evaluations of the brand, advert and speaker. The results showed no congruency&#13;
effect; however, other striking findings were revealed and are discussed in this&#13;
paper. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1877">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1878">
                <text>This study used a 2 (Accent: Northern English vs. Italian) x 2 (Product: fish and chips&#13;
vs. pizza) between-subjects design. The dependent variables were participants’&#13;
attention to the ad, memorability of the advertised brand name, purchase intentions,&#13;
evaluations of the speaker, and attitude towards the ad and brand. Additionally, the&#13;
evaluations of the speaker included their perceived warmth, competence, socio-intellectual status, aesthetic qualities, and dynamism traits.&#13;
Participants&#13;
Through opportunity sampling, 82 participants were recruited. This sample&#13;
comprised 29 males and 53 females. Participants were randomly assigned to one of&#13;
the four conditions. The participants’ age ranged from 19 to 65 (Mage = 25.5 years,&#13;
SDage = 10.8). All but one participant were native speakers of English.&#13;
Materials&#13;
Radio Advertisements. For this experiment, four radio adverts were created&#13;
(see Appendix A). Two ads were accent-product congruent (Italian accent and pizza,&#13;
Northern English accent and fish and chips) and two ads were accent-product&#13;
incongruent (Italian accent and fish and chips, Northern English accent and pizza). In&#13;
order to create the adverts, two male speakers were recruited. One of the speakers&#13;
spoke with an authentic Northern English accent and one of the speakers spoke with&#13;
an authentic Italian accent; both spoke at similar paces with no major differences in&#13;
their tone of voice.&#13;
Questionnaire. The questionnaire used in the experiment was created via the&#13;
survey software, Qualtrics. The questionnaire took approximately 10 minutes to&#13;
complete. The items and scales used to measure the dependent variables are discussed&#13;
below.&#13;
Brand attitude. Participants’ attitude towards the advertised brand was&#13;
measured using a 4-item, 7-point bipolar scale used in Liu, Wen, Wei, and Zhao’s&#13;
(2013) study (ɑ = .92). See Appendix B for the full subscale.&#13;
Ad attitude. Participants’ attitude towards the advert subscale was taken from&#13;
Lalwani, Lwin, and Li’s (2005) study. The participants were asked to rate the radio&#13;
advert using 4-items with 7-point bipolar scales (ɑ = .87). See Appendix C.&#13;
Attention to the ad. Also taken from Lalwani et al.’s (2005) study were 3&#13;
items with 7-point Likert scales to measure participants’ attention to the ad (ɑ = .24).&#13;
The Cronbach’s alpha score was low; however, removing items did not increase the&#13;
alpha enough to represent a robust measure. See Appendix D.&#13;
Purchase intentions. In addition, based on the scales used in Hornikx, van&#13;
Meurs, and Hof’s (2013) research, the questionnaire included 3-items with 7-point&#13;
bipolar scales to measure participants’ purchase intentions (ɑ = .88). See Appendix E.&#13;
Competence and warmth. The questionnaire included questions which&#13;
measured the perceived competence and warmth of the speaker. The 9-items for&#13;
competence (ɑ = .90) and 9-items for warmth (ɑ = .92) were presented together. The&#13;
scales used for the items were 7-point Likert scales (1 = Strongly Disagree, 7 = Strongly&#13;
Agree), taken from Rudman and Glick’s (1999) study. The list of items used can be&#13;
found in Appendix F and G, respectively.&#13;
Socio-intellectual status, aestheticism and dynamism. Also, the questionnaire&#13;
included the Speech Dialect Attitudinal Scale by Mulac (1975, 1976). This consisted &#13;
of 12-items (four items for each subscale) with 7-point bipolar scales measuring the&#13;
participants’ perceived socio-intellectual status (ɑ = .85), aestheticism (ɑ = .85), and&#13;
dynamism of the speaker (ɑ = .76). See Appendix H.&#13;
Similarity. To measure participants’ perceived similarity to the speaker in the&#13;
ad, the questionnaire included 3 items with 7-point Likert scales (ɑ = .80) taken from&#13;
Lalwani et al.’s (2005) questionnaire. See Appendix I.&#13;
Manipulation check. The questionnaire examined if participants correctly&#13;
identified the accent used by the speaker in the ad. Participants were asked “What was&#13;
the accent of the speaker in the ad?”.&#13;
Memorability of the brand name. At the end of the questionnaire the&#13;
participants were asked the open-ended question “Please write down the product’s&#13;
brand name that was advertised in the radio ad you listened to.”.&#13;
Additional questions. The questionnaire included additional questions which&#13;
investigated whether any factor other than accent affected participants’ responses.&#13;
These questions consisted of 7-point bipolar scales, 7-point Likert scales, unipolar&#13;
scales, and open-ended questions (see Appendix J). The questions measured the&#13;
comprehensibility of the speaker in the ad, participants’ attitudes towards the accent,&#13;
accent of the participant, likability of the advertised products, hunger, and native&#13;
language of the participant. The questionnaire also asked demographic questions.&#13;
Procedure&#13;
After giving informed consent, participants were randomly assigned to an&#13;
experimental condition and sent the link to the Qualtrics questionnaire. At the&#13;
beginning of the questionnaire the radio ad was played followed by the questions. The&#13;
order in which the items were presented was brand attitude, ad attitude, attention to&#13;
ad, purchase intentions, warmth and competence, socio-intellectual status of speaker, &#13;
aestheticism of speaker, dynamism of speaker, similarity to speaker,&#13;
comprehensibility of speaker, accent of the speaker, attitude towards the ad, accent of&#13;
the participant, likeability of the advertised product, frequency of eating advertised&#13;
product, hunger of participant, participants’ first language, brand name memorability,&#13;
and finally followed by the demographic questions. On completion of the&#13;
questionnaire, participants were thanked and debriefed.&#13;
Analysis&#13;
A multivariate ANOVA was used to test the main and interaction effects of&#13;
accent and product on participants’ evaluations. Also, separate univariate ANOVAs&#13;
were conducted to explore whether there were any covariate effects on participants’ attention&#13;
to the ad, brand memorability, evaluations of brand, ad or speaker. The covariate&#13;
variables were participants’ perceived similarity to the speaker, comprehensibility of&#13;
the speaker, participants’ attitude towards the speaker’s accent, hunger, and the&#13;
frequency and likability of eating the advertised product. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1879">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1880">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1881">
                <text>Trow2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1882">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1883">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1884">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1885">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1886">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1887">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1888">
                <text>Dr Tamara Rakić</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1889">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1890">
                <text>Advertising, Marketing, Cognitive Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1891">
                <text>82 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1892">
                <text>MANOVA, ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="82" public="1" featured="0">
    <fileContainer>
      <file fileId="40">
        <src>https://www.johnntowse.com/LUSTRE/files/original/30c348dadb095597a7d9679478f43a12.doc</src>
        <authentication>ef312b9c3444f21c8304146da60d1295</authentication>
      </file>
    </fileContainer>
    <collection collectionId="8">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="191">
                  <text>Ratings</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="192">
                  <text>Studies where participants make a series of ratings or judgements when presented with stimuli</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1893">
                <text>Interacting in a Virtual Environment, the role of visual perception, the human hand and the recognition of rescaling.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1894">
                <text>Connor Yates</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1895">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1896">
                <text>A common assumption in perception research is that we estimate the size of the environment by using our own hand as a perceptual metric, comparing the size of our hand to the objects around it. Subsequent research explored this effect by manipulating the size of the hand and found that, even when the hand was magnified or minimised, people perceived their hand to stay around the same size. This effect, in which the hand is perceived as a constant size, is called the hand-size constancy effect. The current research aimed to expand on previous work by examining whether hand-size constancy still occurs when hand size increases in the presence of the participant, using a new method which eliminates more demand characteristics than previous hand-size constancy research. Participants took part in a virtual reality scenario in which, each time they attempted the task, their hand or a non-corporeal hand gradually increased in size, up to a total increase of 38%. Participants recognised the increase in hand size in the non-corporeal condition but did not notice the change in the real hand condition. These results support previous findings that hand-size constancy can still occur even when the demand characteristics of earlier research are eliminated through a more discreet method.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1897">
                <text>Visual Perception&#13;
Rescaling effects&#13;
Virtual Reality&#13;
Hand-size Constancy&#13;
Body size effects</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1898">
                <text>Participants.&#13;
	The participants were 30 typically developing adults between the ages of 19 and 50 (N = 30, 12 male and 18 female, M = 24.39 years old, SD = 7.76 years). Participants were mainly recruited from a major university in the North West of England using posters on the university campus and online advertisements. Participants received £5 in return for their participation.&#13;
Materials&#13;
	The current research used the Oculus Rift with leap motion to detect hand movement. The experiment was created using the Unity game engine software to create a programme called the virtual bowling alley. The virtual bowling alley was created to mimic a real table top bowling alley in which all the items in the game were created for this experiment including the bowling ball, the pins and the virtual hand used in the experiment.   &#13;
	Two questionnaires were essential to this study: “The Embodiment Questionnaire” and “The Virtual Presence Questionnaire”. The Embodiment Questionnaire was adapted from Sanchez-Vives’ research exploring visual hand illusions (Sanchez-Vives, Spanlang, Frisoli, Bergamasco, &amp; Slater, 2010). It was used to test the extent of different variables the participant might exhibit whilst in virtual reality: ownership of limbs in virtual reality (e.g. “I sometimes felt as if my hand was located where my virtual hand was to be”), the illusion of movement, which looked at how much the virtual arm impacted the movement of the real arm, validity, which looked at how the participant’s movements impacted their virtual arm, and control, regarding how much control they had over their virtual arm. The Embodiment Questionnaire uses a 7-point Likert scale on which participants rate how much they agree with each statement (Appendix 1). The other questionnaire required for this study was “The Virtual Presence Questionnaire”, an adapted version of the instrument from Usoh’s paper on presence questionnaires (Usoh, Catena, Arman, &amp; Slater, 2000). The Virtual Presence Questionnaire was used to examine how strongly participants rated their immersion in the virtual scenario, through questions which examined whether their sense of being in the virtual scenario was stronger than their sense of place in their actual location in the virtual reality lab. For example, questions like “To what extent were there times the virtual bowling scenario was reality for you” were used to examine immersion and presence. The Virtual Presence Questionnaire also used a 7-point Likert scale (1 = disagree, 7 = agree) (Appendix 2).&#13;
	Other materials required were a calculator to count the number of bowling pins knocked down on each attempt and to total the number of bowling pins knocked down per participant. All the appropriate ethical documentation was also required (information sheet, virtual reality health and safety sheet, consent document and debriefing sheet).&#13;
Procedure&#13;
	Once the 30 participants required for the study were obtained, each was asked to sign up on a digital calendar, selecting a day they were free to take part, with the promise of a 30-minute experiment and a £5 reward. Participants were advised to arrive at the lab 10 minutes prior to the study, and on arrival they were greeted by the researcher at the door of the lab. After a short introduction, the participant was seated at a table with some documents and writing equipment.&#13;
 The participant was asked to read the study information sheet first; this sheet contained contextual information about the task they would take part in. Once the participant confirmed that they understood all the information on the sheet, they were given the ethics consent form to sign. The consent form set out the participant’s ethical rights (right to withdraw, anonymisation of the data, etc.); participants were advised to read through it carefully to make sure they understood these rights, and were asked to add their name, age and the date. On a separate piece of paper, the researcher noted the participant’s participant number, which was used to code the data anonymously. When the participant had completed the ethics consent form, they were told that the experiment would now begin.&#13;
 The participant was escorted to a different desk with a computer set up on it. The computer was positioned so that the chair sat at a set distance from the Oculus Rift sensor to allow for full immersion. The participant sat on this chair, the computer was set to the home screen, and the researcher assisted the participant in putting on the Oculus Rift head-mounted display (HMD), which had a hand sensor attached to the front to detect hand movement. Once seated, the participant was asked to confirm that they were comfortable with the HMD on and could see clearly. When the participant gave consent, they were told that they were about to enter the virtual bowling alley, an in-house virtual scene created for this experiment. The virtual bowling alley was built in the Unity engine using C++ to create virtual objects such as the pins, each with an interaction engine script attached to give them physics. It was a table-top bowling simulation, created with the intention that there would be a lot of hand exposure during the experiment, as participants would have to use their hands to push the ball and knock over the pins.&#13;
Participants were assigned to one of two groups: the hand group or the non-corporeal hand group. The group a participant was assigned to determined what type of hands they would have during the virtual bowling scenario; for example, participants in the hand group entered the virtual bowling alley with regular virtual hands created to mimic real hands (Figure 2). Participants assigned to group 2 (the non-corporeal hand group) instead saw blocks in place of their hands on entering the virtual bowling alley. These block hands replaced the hands in virtual reality with objects that could complete the same tasks as a hand but did not represent the hand in any way: a non-corporeal hand.&#13;
&#13;
When participants entered the virtual bowling scenario and confirmed that they were calibrated (their visual viewpoint was correct and they could move their hands around accurately), they were told they had 20 attempts to knock down as many pins as they could; with 10 pins per attempt, this gave a total of 200 pins. Each time the participant completed an attempt, the experimenter pressed a key on the keyboard which reset the pins and the bowling ball. Each time the alley was reset, the participant’s hand (group 1) or cubes (group 2) increased in size by approximately 2% per attempt, so that by the end of the 20 attempts the hand or non-corporeal hand had increased in size by 38%. It is also worth mentioning that each time an attempt was completed and the alley reset, the bowling ball randomly changed size (10 different sizes per experiment, ranging from a 50% increase to a 50% decrease, each used twice). The changes in ball size were required so that participants could not use the bowling ball as a reference of scale against the change in size of their hands or cube hands (non-corporeal hands).&#13;
	When the participant completed the 20 attempts of the bowling task, the virtual bowling programme exited automatically; the participant was asked to take off the HMD and was escorted back to the table where they had completed their consent form. The experimenter noted, on a separate sheet, the participant’s total number of bowling pins knocked down out of 200. Once the participant was seated at the table, the experimenter handed them a sheet with two questions on it. Question 1 asked: “Did you detect any changes whilst in the virtual environment?”, with a yes-or-no response. After answering question 1, the participant was asked question 2: “If hand size was manipulated, would you estimate your hand changed in size or not?”, also answered yes or no. If the participant responded “yes” to question 2, the researcher asked whether they estimated that hand size had increased or decreased, and asked them to note this response underneath question 2. After answering the two questions about the virtual bowling alley, the participant was handed two more documents, both questionnaires. The participant was asked to fill out the virtual presence questionnaire first and then the virtual embodiment questionnaire, and was told that they could ask questions about the questionnaires at any time. After the participant confirmed that they were happy with their responses and had completed all the questions, the experimenter passed them a debrief sheet, which gave more context to the experiment and was very explicit about the participant’s hand changing in size over time. The participant was asked if they had any questions regarding the experiment; if they did, the researcher happily answered them, and if not, the researcher thanked the participant for their time. 
&#13;
	When all the results had been collected from the 30 participants, the data was stored on a locked private computer to which only the experimenters had access. All documents regarding the experiment were locked in a storage cabinet. The independent variable in this study was hand type (hands vs non-corporeal hands) and the dependent variable was the response to the questions regarding the virtual bowling scenario (questions 1 and 2). Given the nature of the dependent variable, a Chi-Square test was used, as nominal data was collected on two independent groups. Other data regarding age, gender, handedness, virtual presence scores and virtual embodiment scores were analysed using independent t-tests.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1899">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1900">
                <text>data/SPSS.sav&#13;
data/csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1901">
                <text>Yates2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1902">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1903">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1904">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1905">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1906">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1907">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1908">
                <text>Dr Sally Linkenauger</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1909">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1910">
                <text>Cognitive, Perception Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1911">
                <text>30 Participants (12 male and 18 female)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1912">
                <text>Chi-squared&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="83" public="1" featured="0">
    <fileContainer>
      <file fileId="41">
        <src>https://www.johnntowse.com/LUSTRE/files/original/70e8b6f0e20b7e3f46e642c7284bd8a8.doc</src>
        <authentication>6d2e0f9e5936d11253c9ab16b9bc1842</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1913">
                <text>Experiencing social acceptance and rejection through ‘likes’ and ‘dislikes’: Does sleep quality affect the processing of social rewards?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1914">
                <text>Abigail Taylor-Spencer</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1915">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1916">
                <text>In adolescence, high importance is placed on peer evaluations, and social rewards have increased salience during this developmental period. Sleep patterns also change in adolescence, as teenagers typically experience insufficient sleep. This research measured the pupil dilation of forty-four adolescents aged 16 to 18 using two tasks (audio and visual) to investigate whether sleep duration influenced the way social acceptance and rejection were processed. Sleep duration scores were obtained using the measure of sleep debt; this was calculated by subtracting sleep duration during the week from sleep duration at the weekend, plus weekday bedtime. It was expected that higher sleep debt would be linked to increased pupil reactivity towards social feedback and that there would be a greater pupil dilation in response to social rejection compared to social acceptance. In the visual task, it was found that sleep debt affected males and females differently when processing social rewards, as females with high sleep debt showed increased pupil dilation towards positive feedback compared to negative feedback, whereas males with low sleep debt showed a larger dilation towards positive feedback than females. It was also found that females with lower sleep debt gave more likes than dislikes when rating photos. This implies that sleep duration affects the social feedback adolescents provide. When a male voice was used in the audio task, more pupillary reactivity towards social acceptance was observed; however, when a female voice was used, pupils dilated more in response to social rejection. Future research should further investigate these gender differences.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1917">
                <text>Adolescence&#13;
Pupil dilation&#13;
Social feedback&#13;
Reward&#13;
Rejection&#13;
Sleep debt</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1918">
                <text>Participants&#13;
	Forty-four participants (N = 44) were recruited from Haslingden High School and Sixth Form to participate in this research. The participants (35 female, 9 male) were all between the ages of 16 and 18 (Mage = 16.98, SDage = .63). Students in Psychology, Sociology and English classes were given the opportunity to participate in the research and contacted the researcher via email if they wished to take part. Each participant provided their informed consent before beginning the study.&#13;
Materials&#13;
	Photo ratings. Firstly, the participants were shown a PowerPoint containing 40 photos, which had previously been collected by the researcher and featured adolescents whom the participants did not know. Each photo was displayed individually for four seconds, meaning that the presentation lasted two minutes and forty seconds in total. Participants were provided with a sheet of paper on which they could tick either ‘like’ or ‘dislike’ for each photo on the PowerPoint (see Appendix A). The total number of likes was calculated for each participant.&#13;
	Eye tracker. An Eye Tribe desktop eye tracker with a 30Hz sampling rate was used to measure the pupil dilation of the participants in response to stimuli on two tasks: a visual task and an audio task. A chin rest was used to ensure the participants kept their heads still.&#13;
	Visual task. The visual task involved showing the participants the same 40 photos they had previously been shown in the photo rating task; however, each photo had either a ‘like’ symbol or a ‘dislike’ symbol (see Figure 1) in the bottom right-hand corner. Participants were informed prior to beginning the task that if a photo contained the ‘like’ symbol, it meant that the individual in the photo had liked the participant’s picture, whereas the ‘dislike’ symbol meant that the individual in the photo had disliked the participant’s picture. The presentation of photos was randomised across participants.&#13;
&#13;
Audio task. The audio task involved the participants listening to forty voice recordings, each lasting between six and seven seconds. Twenty of these recordings were nice comments and twenty were nasty comments, which were found on online social media platforms. An example of a nice comment is: ‘You look unreal and your outfit is amazing. You are a true inspiration to everyone’, and an example of a nasty comment is: ‘You are so fake, and you are such a liar. Every single thing you say is a lie’ (see Appendix B for the complete list of comments). A male voice read out half of the nice and half of the nasty comments, and a female voice featured in the other half of the recordings. The nice comments were characterised as positive social feedback, and the nasty as negative social feedback. The presentation of nice and nasty comments was randomised across participants. The audio material was rated for emotional valence and arousal; the former being how positive or negative the recordings were, and the latter being the intensity of this positivity or negativity (Citron, Gray, Critchley, Weekes, &amp; Ferstl, 2014). See Appendix C for the emotional valence and arousal scores, which were rated by six individuals using Qualtrics.&#13;
	Questionnaires. Participants were asked to complete two questionnaires; one which was an adaptation of the MCTQ questionnaire (Munich ChronoType Questionnaire; Roenneberg, Wirz-Justice &amp; Merrow, 2003), to identify the sleeping patterns of the participants (see Appendix D), and a questionnaire about their social media use (see Appendix E) which was used to maintain the ruse that the study was interested in the participants’ social media use.&#13;
	This study received ethical approval from Lancaster University on 05/04/2018.&#13;
Design&#13;
	Variables. The dependent variable in this study was pupil size, measured in arbitrary units using an Eye Tribe eye tracker. An average pupil diameter was calculated for each trial, giving each participant 40 average pupil size measurements in the visual task and 40 in the audio task. The median and the area under the curve of these data were used as the dependent measures. The independent variables in the study were feedback valence, sleep debt, voice gender and participant gender.&#13;
	Feedback valence. Feedback valence was a within-subjects factor, as all forty-four participants experienced both positive and negative feedback in both tasks. In the visual task, all participants saw twenty people who had supposedly ‘liked’ their photo and twenty people who had supposedly ‘disliked’ it. In the auditory task, all participants heard twenty positive comments and twenty negative comments. This factor was analysed to assess whether positive and negative social feedback elicited different pupillary responses.&#13;
	Sleep debt. Sleep debt was determined by the MCTQ (Roenneberg et al., 2003); a sleep debt value was calculated by subtracting sleep duration during the week from sleep duration at the weekend, plus weekday bedtime. Participants were split into two groups: high sleep debt and low sleep debt. Those with high sleep debt had less weekday sleep and more weekend sleep, which is a marker of poor sleep quality. This was a between-subjects factor, as half of the participants were in the high sleep debt group and half in the low sleep debt group.&#13;
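	The sleep debt grouping described above can be sketched as follows (an illustrative sketch, not the study’s actual code; the function names are hypothetical, the weekday bedtime term is omitted for simplicity, and a median split is assumed as one plausible way of forming the two groups):&#13;

```python
# Illustrative sleep debt measure: weekend sleep duration minus
# weekday sleep duration, in hours. Field names are hypothetical;
# the MCTQ itself defines the actual items and scoring.
def sleep_debt(weekday_sleep_h, weekend_sleep_h):
    """Positive values indicate catch-up sleep at the weekend."""
    return weekend_sleep_h - weekday_sleep_h

def median_split(debts):
    """Assign each participant to a 'low' or 'high' sleep debt group
    via a median split (assumed grouping rule for this sketch)."""
    ordered = sorted(debts)
    n = len(ordered)
    if n % 2 == 0:
        med = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    else:
        med = ordered[n // 2]
    return ["high" if d > med else "low" for d in debts]
```
&#13;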
	Voice gender. In the audio task, half of the audio clips featured a male voice and half featured a female voice; because every participant heard both, this was a within-subjects factor. It was analysed to investigate whether the gender of the voice had an effect on the pupillary responses.&#13;
	Gender. In the visual task, the gender of the participants was investigated as a between-subjects factor; nine of the participants were male and thirty-five were female.&#13;
	Audio task. The audio task used a factorial design with a between-subjects factor of sleep debt (two levels: low and high) and within-subjects factors of social feedback valence (two levels: positive and negative) and voice gender (two levels: male and female).&#13;
	Visual task. The visual task used a factorial design with between-subjects factors of sleep debt (two levels: low and high) and participant gender (two levels: male and female), and a within-subjects factor of social feedback valence (two levels: positive and negative).&#13;
Procedure&#13;
	Approximately two weeks prior to the beginning of data collection, students in Psychology, Sociology and English classes at Haslingden Sixth Form were contacted and given the opportunity to participate in this research. Those who were interested in participating and willing to provide consent were asked to email the researcher a picture containing only themselves (e.g. a Facebook profile picture) for use in the study. The participants were informed that the photo they sent would be liked or disliked by students at another school, and that during the study there would be an opportunity to like or dislike photos of the individuals who rated their picture. No other information about the other ‘students’ was provided. The participants were led to believe that the study was investigating whether social media use affects responses to being judged online, and whether the use of social media affects sleep patterns in adolescence.&#13;
	All participants were tested in the same office at Haslingden High School and Sixth Form. Participants were invited into the office and asked to sit at a desk equipped with an Eye Tribe eye tracker, a 24-inch iMac monitor and a keyboard; a chin rest was placed 50 cm from the eye tracker. The computer had MATLAB 2015 installed. Each participant was provided with an information sheet (see Appendix F) and given the opportunity to ask any questions before signing an informed consent form (see Appendix G) if they still wished to take part.&#13;
	Once the consent form had been signed, the photo rating task was explained. This task involved presenting forty photos to the participants using Microsoft PowerPoint. The photos were shown individually, one per slide, and each was presented for four seconds. The participants were asked to mark on a sheet of paper whether they ‘liked’ or ‘disliked’ each photo (see Appendix A). The presentation ran on an automatic timer; however, the participants were informed that if a slide moved on too quickly, the left arrow key would take them back to the previous slide, and pressing the right arrow key would resume the timed presentation. The participants were led to believe that the photographs they were rating were of the individuals who had rated their photos. The eye tracker was not used during this task.&#13;
	Next, the participants were asked to place their head on the chin rest, and the eye tracker was calibrated. Participants were asked to keep their heads as still as possible and to move their eyes towards the dots as they appeared on the screen. The calibration was accepted when a rating of three stars or above was achieved, and the eye tracker was used for both the visual and auditory tasks. The order in which the tasks were completed was counterbalanced: half of the participants completed the visual task first, and half completed the auditory task first. The participants were informed what would happen during each task and were given the opportunity to ask any questions before the tasks began.&#13;
	The participants were told that, in the auditory task, they would hear forty voice clips: twenty nasty and twenty nice. They were asked to look at a black cross in the centre of the screen while the voice clips were playing. Ten of the ‘nice’ clips and ten of the ‘nasty’ clips were read aloud by a female voice, and the remainder by a male voice. The nice and nasty comments featured in the voice clips were sourced from online social media platforms (see Appendix B for the full list of comments used); however, the participants were asked to imagine that the comments had been directed towards themselves. Participants were told that, in the visual task, they would view the photographs which they had previously ‘liked’ or ‘disliked’ in the photo rating task. This time, however, each photo would have either a ‘like’ thumb or a ‘dislike’ thumb in the bottom right-hand corner (see Figure 2 and Figure 3 for examples). A ‘like’ thumb meant that person had supposedly liked the participant’s photo, whereas a ‘dislike’ thumb meant the individual in the photo had disliked it. The two tasks were counterbalanced across participants to check whether presentation order influenced the outcome.&#13;
After finishing both the visual and auditory tasks, participants were asked to complete two questionnaires: the MCTQ (Roenneberg et al., 2003), to determine a sleep debt score, and a questionnaire on social media use. After completing the questionnaires, participants were informed that their photo had not actually been seen or rated by pupils at another school, and that the ratings they gave in the photo rating task would not be seen by the individuals in the photos. Participants were then provided with a debrief sheet (see Appendix H) and given the opportunity to ask any questions they may have had.&#13;
Analysis&#13;
Preliminary data analysis. In order to measure the magnitude of change in pupil dilation and compare across conditions, each trial’s pupil data were baseline adjusted by subtracting the mean pupil size in the 300 ms prior to stimulus onset from each sampled value during the subsequent 4 s of stimulus presentation. The area under the curve and the median were then calculated from the trial-level baseline-adjusted data to provide the dependent variables in the analysis, capturing the magnitude and duration of the effects respectively. The median was used rather than the mean because it is less likely to be skewed by outliers.&#13;
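The baseline adjustment and the two summary measures can be sketched as follows (illustrative only, not the study’s actual code; the list-based data layout and the sampling interval parameter are assumptions for the example):&#13;

```python
import statistics

def baseline_adjust(trial_samples, baseline_samples):
    """Subtract the mean pupil size over the pre-stimulus baseline
    window (here, the 300 ms before onset) from every sample recorded
    during stimulus presentation."""
    baseline_mean = sum(baseline_samples) / len(baseline_samples)
    return [s - baseline_mean for s in trial_samples]

def trial_summaries(adjusted, dt):
    """Return the median (robust to outliers) and the area under the
    curve (trapezoidal rule; dt = seconds between samples)."""
    med = statistics.median(adjusted)
    auc = sum((adjusted[i] + adjusted[i + 1]) / 2 * dt
              for i in range(len(adjusted) - 1))
    return med, auc
```
&#13;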
Two mixed effects general linear models (GLMMs) were used to analyse the data for the two tasks, with participant included as a random effect with intercept. A first-order autoregressive (AR(1)) covariance structure with homogeneous variances was selected because the errors were expected to become less correlated as the trials became further apart. The total number of likes each participant gave on the photo rating task was also calculated, and a 2 (gender: male vs. female) x 2 (sleep debt: low vs. high) between-subjects analysis of variance (ANOVA) was carried out.&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1919">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1920">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1921">
                <text>Taylor-Spencer2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1922">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1923">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1924">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1925">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1926">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1927">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1928">
                <text>Judith Lunn</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1929">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1930">
                <text>Cognitive Psychology&#13;
Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1931">
                <text>44 Participants (9 male and 35 female)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1932">
                <text>ANOVA&#13;
Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
