<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=11" accessDate="2026-05-03T08:37:12+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>11</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="81" public="1" featured="0">
    <fileContainer>
      <file fileId="39">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4dd9543e110a7e4ce23d67ad7dc07aff.pdf</src>
        <authentication>40c67288eea36432d7427dbc94d64dac</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1873">
                <text>A Match Made in Heaven? The Effect of Congruency Between Accent and Promoted Product in Radio Adverts</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1874">
                <text>Samantha Trow</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1875">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1876">
                <text>Research consistently shows that accents are powerful social cues in our everyday interactions as well as in advertisements; they can change how we perceive others and, potentially, the products or brands associated with them. Recent studies have explored the effect of congruency between the accent of the speaker in an advert and the country of origin of the advertised product, yet the findings on this congruency effect are mixed and sparse. This study therefore investigated the congruency effect further. Participants were randomly assigned to one of four experimental conditions in a 2 (Accent: Northern English vs. Italian) x 2 (Product: fish and chips vs. pizza) between-participants design. Two adverts had a congruent accent-product pair (e.g., a Northern English speaker advertising a fish and chips brand) and two were accent-product incongruent (e.g., a Northern English speaker advertising a pizza brand). After listening to the ads, participants completed a questionnaire measuring brand memory, attention to the ad, purchase intentions, perceived similarity to the speaker, and evaluations of the brand, advert, and speaker. The results showed no congruency effect; however, other striking findings were revealed that are discussed in this paper.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1877">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1878">
                <text>This study used a 2 (Accent: Northern English vs. Italian) x 2 (Product: fish and chips vs. pizza) between-subjects design. The dependent variables were participants' attention to the ad, memorability of the advertised brand name, purchase intentions, evaluations of the speaker, and attitudes towards the ad and brand. The evaluations of the speaker comprised perceived warmth, competence, socio-intellectual status, aesthetic qualities, and dynamism.

Participants
Through opportunity sampling, 82 participants were recruited. The sample comprised 29 males and 53 females. Participants were randomly assigned to one of the four conditions. Their ages ranged from 19 to 65 (Mage = 25.5 years, SDage = 10.8). All but one participant were native speakers of English.

Materials
Radio advertisements. For this experiment, four radio adverts were created (see Appendix A). Two ads were accent-product congruent (Italian accent and pizza; Northern English accent and fish and chips) and two were accent-product incongruent (Italian accent and fish and chips; Northern English accent and pizza). To create the adverts, two male speakers were recruited: one spoke with an authentic Northern English accent and the other with an authentic Italian accent. Both spoke at similar paces with no major differences in tone of voice.

Questionnaire. The questionnaire used in the experiment was created with the survey software Qualtrics and took approximately 10 minutes to complete. The items and scales used to measure the dependent variables are described below.

Brand attitude. Participants' attitude towards the advertised brand was measured using a 4-item, 7-point bipolar scale from Liu, Wen, Wei, and Zhao's (2013) study (α = .92). See Appendix B for the full subscale.

Ad attitude. The subscale for participants' attitude towards the advert was taken from Lalwani, Lwin, and Li's (2005) study. Participants rated the radio advert on 4 items with 7-point bipolar scales (α = .87). See Appendix C.

Attention to the ad. Also taken from Lalwani et al.'s (2005) study were 3 items with 7-point Likert scales measuring participants' attention to the ad (α = .24). The Cronbach's alpha was low; however, removing items did not increase it enough to yield a robust measure. See Appendix D.

Purchase intentions. Based on the scales used in Hornikx, van Meurs, and Hof's (2013) research, the questionnaire included 3 items with 7-point bipolar scales measuring participants' purchase intentions (α = .88). See Appendix E.

Competence and warmth. The questionnaire included items measuring the perceived competence and warmth of the speaker. The 9 items for competence (α = .90) and 9 items for warmth (α = .92) were presented together on 7-point Likert scales (1 = Strongly Disagree, 7 = Strongly Agree), taken from Rudman and Glick's (1999) study. The items are listed in Appendices F and G, respectively.

Socio-intellectual status, aestheticism, and dynamism. The questionnaire also included the Speech Dialect Attitudinal Scale by Mulac (1975, 1976), consisting of 12 items (four per subscale) with 7-point bipolar scales measuring the speaker's perceived socio-intellectual status (α = .85), aestheticism (α = .85), and dynamism (α = .76). See Appendix H.

Similarity. To measure participants' perceived similarity to the speaker in the ad, the questionnaire included 3 items with 7-point Likert scales (α = .80) taken from Lalwani et al.'s (2005) questionnaire. See Appendix I.

Manipulation check. The questionnaire examined whether participants correctly identified the accent used by the speaker in the ad. Participants were asked, "What was the accent of the speaker in the ad?"

Memorability of the brand name. At the end of the questionnaire, participants were asked the open-ended question, "Please write down the product's brand name that was advertised in the radio ad you listened to."

Additional questions. The questionnaire included additional questions investigating whether any factor other than accent affected participants' responses. These consisted of 7-point bipolar scales, 7-point Likert scales, unipolar scales, and open-ended questions (see Appendix J). They measured the comprehensibility of the speaker in the ad, participants' attitudes towards the accent, the participant's own accent, likability of the advertised products, hunger, and the participant's native language. The questionnaire also asked demographic questions.

Procedure
After giving informed consent, participants were randomly assigned to an experimental condition and sent the link to the Qualtrics questionnaire. At the beginning of the questionnaire the radio ad was played, followed by the questions. The items were presented in the following order: brand attitude, ad attitude, attention to the ad, purchase intentions, warmth and competence, socio-intellectual status of the speaker, aestheticism of the speaker, dynamism of the speaker, similarity to the speaker, comprehensibility of the speaker, accent of the speaker, attitude towards the accent, accent of the participant, likability of the advertised product, frequency of eating the advertised product, hunger, and the participant's first language, followed by brand name memorability and, finally, the demographic questions. On completion of the questionnaire, participants were thanked and debriefed.

Analysis
A multivariate ANOVA was used to test the main and interaction effects of accent and product on participants' evaluations. Separate univariate ANOVAs were also conducted to explore whether there were any covariate effects on participants' attention to the ad, brand memorability, and evaluations of the brand, ad, or speaker. The covariates were participants' perceived similarity to the speaker, comprehensibility of the speaker, participants' attitude towards the speaker's accent, hunger, and frequency and likability of eating the advertised product.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1879">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1880">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1881">
                <text>Trow2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1882">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1883">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1884">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1885">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1886">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1887">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1888">
                <text>Dr Tamara Rakić</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1889">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1890">
                <text>Advertising, Marketing, Cognitive Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1891">
                <text>82 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1892">
                <text>MANOVA, ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="80" public="1" featured="0">
    <fileContainer>
      <file fileId="38">
        <src>https://www.johnntowse.com/LUSTRE/files/original/cdeccb8763d386dc1f1f9f5c6d7e1f84.pdf</src>
        <authentication>82841499e425774f7414de8d9c851ef6</authentication>
      </file>
    </fileContainer>
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1853">
                <text>An exploration of how young adults engage with charities</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1854">
                <text>Saday Lakhani</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1855">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1856">
                <text>Research exploring how individuals choose to engage with charities has largely been limited to studies and interpretations from the 20th century, and research into how young adults interact with charities has rarely been explored. The present study addresses both issues by exploring how young adults choose to interact with charities. Using Sargeant's (1990) donor decision model as a base, this investigation explores what motivates and deters potential donors from engaging with charity, and how they choose to engage. It was found that income was a major barrier to donation and that the role of others was an important motivator. Lastly, participants indicated that social media is a prevalent part of how people choose to interact with charities, although donation and volunteering are more highly valued.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1857">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1858">
                <text>Participants
This investigation involved 15 participants based in Lancaster, aged 19-25 years, all of whom studied at Lancaster University. The sample consisted of eight male participants (aged 19-25) and seven female participants (aged 19-22). Participants were recruited opportunistically via social media: advertisements for participation were published on various social media platforms relevant to the University. Each recruited participant was asked to invite a friend to their focus group discussion. Participants were provided with refreshments as an incentive for participation. Given the method of online recruitment, it was assumed that all participants were frequent users of social media and therefore understood its utility. Participants were not screened for their donation history, as it was assumed that individuals would have donated at some point in the past.

Procedure
Each focus group consisted of up to four participants and, as a result of the recruitment method, each group comprised two pairs who were not familiar with each other. The intention of this arrangement was to encourage a more open and honest discussion. In addition, the paired design ensured that statements made by an individual could be verified or challenged by their paired member, who was familiar with the speaker's activities; the paired member could thus act as a moderator for the contributions. The focus groups were segmented by gender: one group consisted of all male participants, another of all female participants, and the remaining two groups were mixed. The purpose of this segmentation was to explore whether responses differed between male and female participants.

The focus group discussions took place in a quiet and comfortable room within Lancaster University to encourage a free-flowing discussion without interruption. Upon arrival, each participant was provided with a participant information sheet to read and a consent form to complete, outlining the nature of the study and the confidentiality of the data recorded. After any questions were addressed, the discussions began and were audio recorded.

The topics for discussion centred on the areas of exploration mentioned above. The discussion was structured (see Appendix C for the Discussion Guide) but open, allowing it to migrate to areas that were pertinent to the participants. The researcher terminated the discussion upon being satisfied that participants had nothing further to add. Participants were then provided with debrief sheets outlining the purpose of the study and its aims.

Each focus group discussion was transcribed into a Word document and subsequently imported into NVivo 12 for qualitative analysis.

Analysis
The transcripts from each group were exported for analysis to NVivo 12 qualitative analysis software (QSR International Pty Ltd, Version 12, 2018) and analysed using the framework for thematic analysis derived from Braun and Clarke (2006). Transcripts were read multiple times to ensure familiarity with the content of the discussions. Areas of the discussion deemed interesting were coded within the software according to both their semantic and latent qualities. These codes were informed by pre-existing psychological literature in addition to codes generated in vivo. The data were then organised into several themes from which conclusions could be generated. These themes were re-analysed to ensure that they were an accurate and valid representation of the content of the discussions, and the final themes were then solidified.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1859">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1860">
                <text>Text/Word.docx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1861">
                <text>Lakhani2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1862">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1863">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1864">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1865">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1866">
                <text>Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1867">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1868">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1869">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1870">
                <text>Marketing, Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1871">
                <text>15 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1872">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="79" public="1" featured="0">
    <fileContainer>
      <file fileId="35">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8f2f87e573c831b72cee2c8b8ba543dc.pdf</src>
        <authentication>f34ccfe7021afea913451a930716e424</authentication>
      </file>
      <file fileId="36">
        <src>https://www.johnntowse.com/LUSTRE/files/original/9fc3d7b08fbf5aba53f3d3f32bc10296.pdf</src>
        <authentication>1697756e4beef9e38469b4104adb6c7b</authentication>
      </file>
      <file fileId="37">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6e8fe4b7fd6c4b29c575e3b1249198eb.pdf</src>
        <authentication>f1ee4628271e3179323d196a01d03e3c</authentication>
      </file>
      <file fileId="77">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4c4162827312b2c2d00e7c64b9587ebd.csv</src>
        <authentication>ed6519051947a6e4b43340598a2c7bf9</authentication>
      </file>
      <file fileId="78">
        <src>https://www.johnntowse.com/LUSTRE/files/original/fe12af25b11cfb5f017a248c53c613e3.csv</src>
        <authentication>aadf65e48136716fdfc5f72bb3921dbe</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1834">
                <text>Investigating the Effects of Challenging Behaviour on the Sibling Relationship: Influenced by Behaviour Topography and Shaped by Attributions of Controllability?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1835">
                <text>Lauren Laverick-Brown</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1836">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1837">
                <text>Challenging behaviour (CB) displayed by individuals with an intellectual disability (ID) is consistently identified as a stressor on the relationship that they have with their typically developing (TD) sibling. Given the potentially damaging effects of CB on the quality of the sibling relationship and the wellbeing of the TD sibling, understanding the cognitions that underpin TD siblings’ emotional and behavioural responses to CB is essential to direct sibling-targeted psychoeducational interventions. This study considered whether siblings’ responses to CB vary according to behavioural topography. Further, the study considered whether any effects detected were shaped by attributions made by TD individuals regarding the controllability of their siblings’ CB. Thirty-eight siblings of individuals with ID, and 36 participants with a nondisabled sibling, completed a web-based questionnaire measuring participants’ positive and negative affect towards their sibling, the nature of their sibling’s CB, and controllability perceptions regarding their sibling’s CB. The results of this study reiterate that CB is a stressor on the sibling relationship, with externally directed CB (i.e. aggression, destruction) eliciting greater negative affect in siblings compared to internally directed behaviours (i.e. self-injury). However, it could not be concluded with an appropriate level of significance (i.e. p&lt;.05) that this was due to participants perceiving their siblings as more in control of their externally directed behaviours. These findings may have resulted from the diverse nature of the participant group. Further research is required to examine specific differences in the emotional impact of each type of challenging behaviour (and then subsequently, whether any differences detected arise due to contrasting perceptions of behaviour controllability).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1838">
                <text>Participants&#13;
Seventy-four TD individuals who had a sibling completed this study. Participants were allocated to one of two conditions according to whether they had a sibling with an ID, or did not (i.e. their sibling was TD). There were 38 participants who had a sibling with ID (82% female, Mage=27.32, SD=9.65) and 36 participants with a TD sibling (92% female, Mage=28.61, SD=10.81); participants ranged from 13 to 60 years old. Siblings&#8217; diagnoses are reported in Table 1. &#13;
Table 1: Diagnoses of participants' siblings&#13;
&#13;
&#13;
Participants were recruited on a voluntary basis through social media advertisements posted by the researcher and by disability organisations (who also sent emails to their followers), and through word of mouth. The researcher developed a digital research flyer summarising the study&#8217;s purpose and procedure, which was distributed as described above. To incentivise participation, participants had the option to enter themselves into a prize draw for a &#163;20 Amazon voucher upon completion of the study. &#13;
A minimum participation age was determined after inputting the text of each questionnaire included in the study into Coh-Metrix (Version 3.0; Graesser, McNamara, Louwerse &amp; Cai, 2004): a web-based software tool assessing the cohesion and coherence of a text. Coh-Metrix provides an index of readability by generating the reading age of a piece of text; the reading age determined for the questionnaires was &#8220;grade six&#8221;, indicating that 10/11-year-olds should have the ability to comprehend and respond to the questions. Thus, it was decided that the questionnaires were suitable for TD individuals aged 12 or above.&#13;
Consent was gained from all those over 16 years of age; parental consent was obtained for, and assent from, those aged 12-15 years (see &#8220;Ethical Considerations&#8221; below for further information).&#13;
Design&#13;
The study was of a correlational design, investigating the relationship between the following continuous variables: the quality of the sibling relationship, CB displayed by the sibling with ID, attributions of controllability made by participants in respect to their sibling’s behaviour, and participants’ general relational abilities. &#13;
As part of further analysis, the intention was to examine whether there were effects of having a brother/sister with a disability, gender, and birth order (i.e. whether participants were older/younger than their sibling) (all between-subjects factors) on the sibling relationship. &#13;
Materials&#13;
During this study, four self-report questionnaires were administered to all participants: the Positive and Negative Affect Scale (PANAS) (Watson, Clark &amp; Tellegen, 1988) (Appendix A), the Behavioral Problems Inventory (Short Form) (BPI-S) (Rojahn, Matson, Lott, Esbensen, &amp; Smalls, 2001) (Appendix B), the Controllability Beliefs Scale (CBS) (Dagnan, Grant &amp; McDonnell, 2004) (Appendix C), and Social Competence and Close Friendship subscales taken from the Harter Self-Perception Profile for Adolescents (Harter, 2012) (Appendix D). The questionnaires were developed and presented using online Qualtrics software (Qualtrics, Provo, UT). &#13;
The Harter Self-Perception Profile for Adolescents (Harter, 2012) is a multidimensional measure of how young people evaluate their scholastic, social, athletic, and job competencies, as well as physical appearance, romantic appeal, behavioural conduct, and close friendship. However, for the purposes of this study, only the subscales regarding social competence and close friendship were included to detect an individual’s general ability in forming and maintaining relationships with others, which might be a confounding influence on detecting the quality of the sibling relationship. Furthermore, the phrasing of the questionnaire was deemed suitable for both adult and adolescent participants.&#13;
The questions are presented as two clauses (e.g. &#8220;Some people know how to make others like them, but&#8230;&#8221;, and &#8220;Some individuals do not know how to make others like them&#8221;). Participants select whether each clause is &#8220;really true for me&#8221; or &#8220;less true for me&#8221;, but are required to make the single selection, out of the four options across both clauses, that is most self-descriptive. These responses are coded onto a 4-point scale, with &#8220;1&#8221; representing poorer social/friendship abilities; some items are negatively coded. Sufficient levels of validity and reliability of the Profile have been reported within a range of population groups (e.g. Donnellan, Trzesniewski &amp; Robins, 2015; Rose, Hands, &amp; Larkin, 2012).&#13;
A modified version of the PANAS (Watson et al. 1988) was used to assess participants&#8217; feelings towards their brother/sister with a disability, which were then used to infer the quality of the sibling relationship (i.e. greater positive affect would indicate a positive and fulfilling sibling relationship, whilst greater negative affect would indicate poor sibling relationship quality). The PANAS is a self-report questionnaire that consists of two separate scales containing emotion-based items that encapsulate positive and negative affect. Participants were asked to think about their sibling and whether they had felt each emotion towards them, rating this on a 5-point scale to specify how often they feel that emotion, ranging from 1 (very slightly or not at all) to 5 (extremely often). &#8220;Total negative affect&#8221; and &#8220;total positive affect&#8221; scores were obtained for each participant, whereby higher scores pertain to greater affect.&#13;
The PANAS has been widely utilised to measure variation in affect, and previous research investigating its psychometric properties concludes it to have high reliability and validity across many populations (e.g. Merz, Malcarne, Roesch, Ko, Emerson, et al., 2013; Bakhshipour &amp; Dezhkam, 2006). In this study, certain items of the PANAS were adapted to ensure that they were recognisable to younger participants; for example, &#8220;hostile&#8221; and &#8220;strong&#8221; were changed to &#8220;angry&#8221; and &#8220;happy&#8221;, respectively. The items &#8220;jittery&#8221;, &#8220;active&#8221; and &#8220;determined&#8221; were excluded as the researcher did not view them as relevant to the sibling relationship. Nevertheless, statistical analysis revealed that internal consistency remained, with the positive and negative affect scales showing high reliability in the current sample (Cronbach&#8217;s &#945; = .87 for negative affect; Cronbach&#8217;s &#945; = .93 for positive affect).&#13;
The BPI-S (Rojahn et al. 2001) is a psychometrically sound behaviour rating instrument (Rojahn, Rowe, Sharber, Hastings, Matson, et al., 2012; Mascitelli, Rojahn, Nicolaides, Moore, Hastings et al., 2015) constituting a series of items referencing examples of CB. When completing the BPI-S, respondents consider whether a specific individual (in this study, participants&#8217; sibling) engages in a behaviour, and then rate its frequency on a 1-to-6-point scale, corresponding to responses ranging from &#8220;never&#8221; to &#8220;daily&#8221;. The original BPI-S also contains a severity-rating subscale; however, this was excluded from the study, as rating the severity of behaviour was deemed to be too complicated for younger participants to judge. &#13;
The BPI-S contains questions relating to three types of problem behaviours: self-injurious, stereotypic, and aggressive/destructive behaviours. For the purposes of this study, the behavioural items of the BPI-S were grouped and scored according to whether they constituted internally directed behaviour (IDB; i.e. self-injury) or externally directed behaviours (EDBs; i.e. aggression and destruction). Items referencing stereotyped behaviour were excluded, as it was not possible to neatly categorise them into IDB or EDB. As an addition to the questions of the BPI-S, an opportunity for &#8220;free text&#8221; was included immediately after, whereby participants could describe any behaviours of concern that were not specified by the questionnaire and rate their frequency. Total scores for the BPI-S were obtained, as well as separate total scores for IDB and EDB frequency, whereby higher scores represent a greater number of incidences of CBs.&#13;
Lastly, the CBS (Dagnan et al., 2004) is a 15-item measure designed to capture participants&#8217; perceptions regarding an individual&#8217;s (in this case, their sibling&#8217;s) control over their CB. Responses are scored on a 1-to-5-point scale, corresponding to &#8216;disagree strongly&#8217;, &#8216;disagree slightly&#8217;, &#8216;unsure&#8217;, &#8216;agree slightly&#8217; and &#8216;agree strongly&#8217;. Ten items are worded such that agreement reflects participants attributing high control over behaviour (e.g. &#8216;They are trying to wind me up&#8217;). In contrast, five items are phrased such that agreement indicates participants attributing low control over behaviour (e.g. &#8216;They don&#8217;t mean to upset people&#8217;); thus, these items are reverse scored. A &#8220;total CBS&#8221; score was calculated for each participant, with higher scores pertaining to perceptions of greater control over behaviour. Moreover, Dagnan et al. (2004) report good internal reliability, with a Cronbach&#8217;s alpha of 0.89.&#13;
Demographic information relating to participants’ age, gender, birth order (i.e. were they older/younger than their sibling) and the diagnosis of their disabled sibling (if their brother/sister was disabled) was collected prior to participants completing the questionnaires.  &#13;
Procedure&#13;
After receiving expressions of interest from prospective participants and confirming they had a sibling (with or without an ID), the researcher issued a participant information sheet detailing the nature and aims of the study. Both groups of siblings followed the same study procedure but received participant information sheets that were relevant to their role in the study. The researcher also provided a weblink to the online consent form hosted by Qualtrics. Once participants completed the consent form, they answered demographic questions and generated a participant code to ensure anonymity of responses. Participants were informed prior to the study commencing that they could withdraw at any time, either by closing the webpage or by contacting the researcher and asking to be removed from the dataset.&#13;
Initially, participants responded to items of the Close Friendship and Social Competence subscales of the Harter Self Perception Profile. Following completion of these questions, participants then completed the PANAS, BPI-S and CBS (in that order). Upon finishing the CBS, participants who had a sibling with ID proceeded to a debrief form that outlined the study in detail and provided contact information for support organisations (if needed following discussion of their encounters with CB). Control participants received a debrief form detailing their role in determining the baseline/typical sibling relationship.&#13;
The procedure differed slightly for participants aged under 16 years old. With one exception, who contacted the researcher directly (but ultimately could not participate due to lack of parental consent), this group expressed their desire to participate through their parents contacting the researcher. In response, the researcher sent a consent form for a parent/guardian to complete, giving their permission for their child to participate in the study. Two participant information sheets were also provided; one for parents and another simplified version of the adult participant information sheet for individuals under 16 years old. Once the researcher had received the completed consent form, the weblink to the study was emailed. It was stressed to parents that, though they may wish to support their child in understanding the questions of the study, they should refrain from guiding their child’s answers.&#13;
After clicking the weblink, younger participants completed an assent form and were informed about the participation withdrawal procedures, if required. The presentation of the questionnaires was the same as for those aged 16 years old and above. However, the debrief form was simplified in its language and content to ensure it was accessible to younger participants. Contact information for organisations who could support this group of participants specifically was also provided. Additionally, younger participants with a non-disabled sibling received a simplified version of the adult participant debrief form relevant to their role in the study. After reading the debrief sheet, all participants were given the opportunity to enter into a prize draw for a £20 Amazon voucher. The study lasted roughly 15-20 minutes. &#13;
All participant information sheets, consent forms and debrief sheets are listed in Appendices E – S. &#13;
&#13;
&#13;
Ethical Considerations&#13;
This study was reviewed and approved by the Psychology Department Research Ethics Committee at Lancaster University.&#13;
The topic of this study revolved around participants&#8217; experiences of CB, which could involve reflection upon sensitive experiences (including those of violence and destructive behaviour) that elicit negative psychological reactions (such as emotional upset, worry, stress, and shame). Furthermore, the minimum age specified for participants was 12 years old, so some participants recruited would be minors (i.e. a vulnerable participant group).&#13;
In case the discussion of CB experiences elicited negative psychological reactions in participants, contact information for sources of wellbeing support was given as part of the study debrief for both young and adult participants (e.g., talking to a trusted family member or a teacher; information and contact details for free services such as Childline, the Samaritans, The CB Foundation etc.). Offering access to support services was particularly important for younger participants, who may not feel able to speak to their parents about any issues they have.&#13;
Furthermore, consent was required from all participants over the age of 16. If a participant indicated being under the age of 16, consent was sought from a parent/guardian, whilst assent was obtained from all 12-to-15-year-old participants. Consent and assent were monitored throughout the study. All participants were given sufficient opportunity to understand the nature, aims and expected outcomes of research participation. Complex technical information was suitably adapted so that participants aged under 16 years old could give consent to the extent that their capabilities allowed. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1839">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1840">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1841">
                <text>LaverickBrown2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1842">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1843">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1844">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1845">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1846">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1847">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1848">
                <text>Chris Walton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1849">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1850">
                <text>Clinical</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1851">
                <text>Seventy-four typically developing individuals</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1852">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="78" public="1" featured="0">
    <fileContainer>
      <file fileId="33">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d6a054cf59a1bcc256a999da72fc52d5.pdf</src>
        <authentication>3aee7166ee8d4897862d4104e49ff70c</authentication>
      </file>
      <file fileId="34">
        <src>https://www.johnntowse.com/LUSTRE/files/original/66186b47da176ed6756ec7ba414f2cef.pdf</src>
        <authentication>1f20290d62202afd81e98b9478272af1</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1814">
                <text>The Development of an Attentional Bias toward Body Size Stimuli: Performance on a Novel Stroop Task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1815">
                <text>Raegan Bridget Cecilia Whitehead</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1816">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1817">
                <text>Distorted perceptions of body size have been identified and well documented in eating disordered (ED) and eating-restricted populations; however, less is known about the development of this distortion. Research has employed Stroop food- and body-word tasks to investigate attentional biases towards semantically related words and found a significant Stroop effect to such stimuli in ED, and sub-clinical, cohorts. The Size Congruity Effect (SiCE) has confirmed the perception of inanimate object size, but such an effect has not yet been studied in regard to body size specifically. This study employed a novel Stroop size task to measure the perception of conceptual body size versus physical object size in four developmental age groups (Child, Adolescent, Young Adult and Adult). The Body Satisfaction Questionnaire (BSQ-34) was also taken as a measure of body dissatisfaction in participants over the age of 18. Findings indicate that a significant attentional bias towards body size is present across all age groups, but is most prevalent in adolescent and young adult participants. These findings imply that cognitive interference towards body size stimuli is not only present in the typical population, but is also present in children from age 7. Body dissatisfaction, measured using the BSQ-34, did not have a significant effect on Stroop interference scores, suggesting that dissatisfaction with one&#8217;s own body does not influence perception of others&#8217; body size. The findings contribute to the field&#8217;s understanding of body size misperception throughout typical development; the results also suggest that body size perception is special, and not processed in the same way as inanimate size.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1818">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1819">
                <text>Participants&#13;
Eighty-eight participants (N = 88) were recruited to participate in this research. The participants (35 males, 53 females) were aged between 7 and 59 years (Mage = 23.38, SDage = 14.34). Participants were divided into one of four groups, dependent on their chronological age.&#13;
Child group. Child participants (N = 24, 8 male and 16 female), aged between 7 and 11 years (Mage = 10.04, SDage = 1.23), were recruited from St Boniface RC Primary School, Salford. A minimum participation age of 7 years was enforced for this experiment as previous research has not identified a consistent Stroop effect with younger children (Comalli et al., 1962). Parental consent was obtained prior to the research and participant assent was obtained on the day of testing. Five participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Five participants reported having a specific learning difficulty (SLD); three participants had dyslexia, one participant had dyspraxia, one participant had attention deficit disorder (ADD) and one participant had attention deficit hyperactivity disorder (ADHD). One participant was on the autism spectrum (ASD). All participants with additional needs were performing well in mainstream school and were therefore considered able to participate in this research. Twelve participants had white British or white Irish ethnicity. Three participants had white European ethnicity. Three participants had black African ethnicity. Three participants had mixed or multiple ethnicities. One participant had Chinese Asian ethnicity. One participant had Irish Traveller ethnicity. Two participants spoke English as a second language, however both were fluent English speakers. Each child received a reward sticker for their participation.&#13;
Adolescent group. Adolescent participants (N = 22, 9 male and 13 female), aged between 13 and 16 years (Mage = 14.73, SDage = 1.12), were recruited through opportunity sampling. Social media posts were used to advertise the study, as well as word of mouth. All participants were recruited from Greater Manchester. Parental consent was obtained prior to testing and participant assent was obtained on the day of testing. One participant was colour blind. Six participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Four participants reported having an SLD; two participants had dyslexia, one participant had dyslexia and dyscalculia and one participant had dyslexia and ADHD. All participants with SLDs were performing well in mainstream school and were therefore considered able to participate in the research. Eighteen participants had white British or white Irish ethnicity. Two participants had black African ethnicity. One participant had mixed or multiple ethnicities. One participant had British and Chinese ethnicity.&#13;
Young Adult group. Young Adult participants (N = 22, 7 male and 15 female), aged between 22 and 33 years (Mage = 25.86, SDage = 2.34), were recruited through opportunity sampling. The researcher advertised the study on social media and approached classmates in Lancaster University’s Psychology Department and workplace colleagues to participate in the research. All participants were recruited from the North West of England. Each participant provided their informed consent prior to the research. Six participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. Two participants reported having an SLD; one participant had dyslexia and one participant had ADD. Both participants reported after testing that they were able to complete the task with no additional difficulty as a result of their SLD. Fifteen participants had white British ethnicity. Five participants had white European ethnicity. One participant had white American ethnicity. One participant had mixed or multiple ethnicities. Three participants spoke English as a second language, however all three were fluent English speakers.&#13;
Adult group. Adult participants (N = 20, 11 male and 9 female), aged between 37 and 59 years (Mage = 45.75, SDage = 8.27), were recruited through opportunity sampling. Social media posts were used to advertise the study, as well as word of mouth. All participants were recruited from Greater Manchester. Each participant provided their informed consent prior to the research. Ten participants required glasses to correct their eyesight and were permitted to wear these throughout the testing period. One participant had dyslexia. This participant reported after testing that they were able to complete the task with no additional difficulty as a result of their SLD. Fourteen participants had white British ethnicity. Three participants had white European ethnicity. Two participants had black Caribbean ethnicity. One participant had mixed or multiple ethnicities.&#13;
Three participants were removed from the data sample due to a high number of errors. The responses from 85 participants were subsequently included in the data analyses. &#13;
&#13;
This study received ethical approval from Lancaster University’s ethics committee on&#13;
1st May 2018.&#13;
&#13;
Materials&#13;
Task. The novel Stroop task was created using PsychoPy, an open-source Python-based program used to run psychological experiments (Peirce, 2007, 2009). In the task participants were presented with computer-generated images of female bodies. Each body was individually presented on the screen and remained there until the participant made their screen size selection. One hundred and eight images were presented in total; 54 in the congruent trial and the same 54 in the incongruent trial. Eighteen unique images were presented three times; each time, the screen size of the image was varied in order to ensure all 18 images were presented in all 3 screen sizes. The 18 images consisted of three model types (see Figure 1) which were used to represent the polarities of body size (3 small body sizes, 3 large body sizes; see Figure 2).&#13;
Figure 1. An image to show the body ‘models’ used in the experiment. Row 1, left to right: Model 1, Model 2, Model 3, Model 4. Row 2, left to right: Model 5, Model 6, Model 7, Model 8.&#13;
&#13;
The first testing phase of the Stroop task consisted of the individual presentation of 54 stimuli. These stimuli were presented with congruent screen and body sizes: all stimuli presented with a small screen size (10 x 4cm, 11 x 4.4cm, 12 x 4.8cm) contained a small body size, and all stimuli presented with a large screen size (21 x 8.4cm, 22 x 8.8cm, 23 x 9.2cm) contained a large body size. The second testing phase of the Stroop task consisted of the individual presentation of the same 54 stimuli as the first phase. These stimuli were presented with incongruent screen and body sizes: all stimuli presented with a small screen size contained a large body size, and all stimuli presented with a large screen size contained a small body size. See Appendix A for screenshots of the Stroop task, demonstrating the congruent and incongruent presentation of the stimuli as described here. The order of stimulus presentation was pseudo-randomised within PsychoPy, so that each individual image was presented only once per participant. Randomising the order of stimulus presentation ensured that participants were not subjected to order effects (Shaughnessy, Zechmeister, &amp; Zechmeister, 2006).&#13;
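The two-phase design above can be sketched in PsychoPy's language, Python. This is a minimal illustration, not the study's actual code: the image names, the small/large body split and the size labels are placeholders.&#13;

```python
import random

# Minimal sketch of assembling the 54 congruent and 54 incongruent trials.
images = ["body_%d" % i for i in range(1, 19)]        # 18 unique images
small_screens = ["10x4cm", "11x4.4cm", "12x4.8cm"]
large_screens = ["21x8.4cm", "22x8.8cm", "23x9.2cm"]

def make_trials(congruent):
    trials = []
    for index, img in enumerate(images):
        # Placeholder assignment: 9 images carry small bodies, 9 large.
        body = "small" if index % 2 == 0 else "large"
        # Congruent: screen size matches body size; incongruent: it clashes.
        if (body == "small") == congruent:
            screens = small_screens
        else:
            screens = large_screens
        for screen in screens:                        # each image at 3 sizes
            trials.append((img, body, screen))
    random.shuffle(trials)                            # pseudo-randomised order
    return trials                                     # 18 x 3 = 54 trials
```

Each call yields 54 trials, with every image appearing at all three screen sizes of its assigned condition.&#13;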
Participants were instructed to determine the screen size of each stimulus and respond as quickly and accurately as possible using the keyboard keys indicated to them in the instruction phase. The relevant keyboard keys (A and L) were indicated with white stickers on the external keyboard. Key allocations (e.g. A = Small, L = Big) were also visible on screen throughout the task; see Appendix A for screenshots of the task. Participant response times were recorded within PsychoPy and exported to Microsoft Excel. The task was presented on a Toshiba Satellite Pro laptop computer with a 15.6-inch HD non-reflective display with a 16:9 ratio and LED backlighting.&#13;
&#13;
Body stimuli. Eighteen images of computer-generated semi-nude female bodies, ranging in body size and physical appearance, were used in the current study. These were created and donated to the researcher by Dr Martin Tovee, a body size perception researcher, for the purpose of the current experiment. The bodies ranged in size from ‘emaciated’ to ‘overweight’, and the variations in body size were visually distinguishable (see Figure 2). Eight ‘models’ were created, each with variations in physical appearance including hair colour and style, skin tone, facial features and eye colour (see Figure 1). All bodies were presented in a forward-facing 0° pose, in order to eliminate visual preference or difficulties in comparing stimuli. Image size was manipulated as a factor of the experiment: to reflect ‘small’ screen size all images were presented at 10 x 4cm, 11 x 4.4cm and 12 x 4.8cm; to reflect ‘big’ screen size all images were presented at 21 x 8.4cm, 22 x 8.8cm and 23 x 9.2cm. These sizes were chosen as they created incremental differences in screen size that were visually distinguishable, as can be seen in Appendix B.&#13;
Figure 2. An image to show the body size increments in the stimuli provided by Dr. Tovee. Model 3 is used to illustrate the size increments. Row 1 Left – Right: Size 1, Size 2, Size 3, Size 4, Size 5. Row 2 Left – Right: Size 6, Size 7, Size 8, Size 9, Size 10. For the purpose of the current experiment, sizes 1, 3, 4, 7, 8 and 10 were used as body size stimuli as these bodies had the largest size variation when visually scrutinised.&#13;
&#13;
Questionnaires. All participants were required to complete a demographic questionnaire, see Appendix C. The parent/guardian of a participant under the age of 16 was required to complete this questionnaire on behalf of the participant. This questionnaire was used to ascertain factors which may affect a participant’s ability to successfully complete the Stroop task.&#13;
Participants over the age of 18 years were also required to complete the Body Shape Questionnaire (BSQ-34; Cooper, Taylor, Cooper &amp; Fairburn, 1987). The BSQ-34 is a 34-item scale which measures participants’ feelings toward their own weight and body shape (Cooper et al., 1987). For example: ‘Have you been afraid that you might become fat (or fatter)?’ and ‘Has seeing your reflection (e.g. in a mirror or shop window) made you feel bad about your shape?’. Each item of this scale is scored on a six-point Likert scale, ranging from 1 (never) to 6 (always). BSQ-34 scores are totalled using Likert scale points; a score of less than 80 indicates no concern with shape, a score between 80 and 110 indicates a mild concern with shape, a score between 111 and 140 indicates a moderate concern with shape and a score of over 140 indicates a marked concern with shape (Cooper et al., 1987). The BSQ-34 was originally intended for use with female participants; the authors have since approved changes to items 9, 12 and 25 for use with male participants, and this version was provided for male participants in the current experiment. The BSQ-34 was not considered suitable for participants under the age of 18 due to the explicit mention of clinically salient stimuli.&#13;
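The scoring bands above can be expressed as a small helper. This is an illustration only, not part of the study's materials; the function name is invented.&#13;

```python
# Maps a BSQ-34 total score to the concern bands reported above
# (Cooper et al., 1987). Illustrative helper, not the study's own code.
def bsq34_band(total):
    if total > 140:
        return "marked concern"
    if total > 110:        # 111-140
        return "moderate concern"
    if total > 79:         # 80-110
        return "mild concern"
    return "no concern"    # below 80
```

For example, a total of 120 falls in the moderate-concern band.&#13;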
The BSQ-34, along with participants’ consent forms and demographic questionnaires, was provided to participants on Adobe Fill &amp; Sign using an Apple iPad and touchscreen pen. All participants indicated daily or weekly use of a touchscreen and/or computer.&#13;
&#13;
Design&#13;
Variables. The dependent variable in this study was task response time, recorded by PsychoPy in milliseconds. Mean response times (MeanRT) were calculated for the congruent and incongruent trials, per participant. An interference score (incongruent MeanRT minus congruent MeanRT) was also calculated for each participant. The dependent variables of MeanRT and Interference Score were both used in the current data analyses. The independent variables in the study were AgeGroup and Congruency.&#13;
AgeGroup. This was a between-subjects factor. Participants were placed into one of four age groups, based solely on their chronological age.&#13;
Congruency. This was a within-subjects factor. All eighty-eight participants completed the same novel Stroop task, containing both congruent and incongruent trials. The order of trial presentation was randomised for each participant.&#13;
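The MeanRT and interference-score computation described above can be sketched as follows; the response times are illustrative values, not study data.&#13;

```python
from statistics import mean

# Interference score as defined above: incongruent MeanRT minus
# congruent MeanRT, in milliseconds. Positive scores indicate slower
# responding on incongruent trials.
def interference_score(congruent_rts, incongruent_rts):
    return mean(incongruent_rts) - mean(congruent_rts)

# Illustrative per-participant response times (ms).
score = interference_score([500, 520, 540], [560, 580, 600])
```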
&#13;
Procedure&#13;
Three months prior to testing, the parents/guardians of children in years five and six of St Boniface RC Primary School, Salford were contacted and given the opportunity for their child to participate in this study. The children of parents/guardians who returned the consent form and completed questionnaire were able to participate. The research was also advertised, via social media and word of mouth, to potential participants. The parents/guardians of participants under the age of 16 years, and participants over 16 years, were provided with an information letter, consent form and demographic questionnaire (see Appendices C, D and E). Those who responded with a complete consent form and questionnaire participated in the research.&#13;
Participants were individually invited to complete the procedure in a small quiet room. All participants were seated at a desk in front of the testing laptop and an external keyboard; see Figure 3 for the testing set-up. Participant consent, and child assent, were obtained once the participants were seated. All participants were encouraged to ask any questions they had and child participants were reminded that they could return to their class at any time, without providing a reason. Once the preliminary period was completed, participants were asked to complete the computerised Stroop task.&#13;
The task was visible on the screen prior to each participant entering the room. Participants were aided through the initial instruction screens of the task and encouraged to stop and ask questions at this stage. The researcher read all instructions to participants under the age of 16, and to any participant who requested that the instructions be read to them. The task then contained two practice trials, in order to ensure that participants understood their role in the task. All participants were able to complete the two practice trials without difficulty and were therefore permitted to complete the rest of the task. The researcher left the room and waited nearby for all participants over the age of 16 years; the researcher remained in the testing room for younger participants. Participants were informed that they should only take a break when they reached an instruction screen as their times were being recorded on all testing screens.&#13;
Participants were asked to alert the researcher once they had completed all stages of the computer task and reached the end screen. Child participants were given an envelope containing a parental debrief and escorted back to their classroom. Young adult and adult participants were asked to complete the BSQ-34 (Cooper et al., 1987) using Adobe Fill &amp; Sign on an Apple iPad. The BSQ-34 was provided after the task as Davison and Wright (2002) reported that this method reduced demand characteristics in a similar study. Upon completion of the testing period all participants were thanked for their time and provided with a debrief sheet as well as help and information pertaining to eating disorder or body anxiety concerns. Child participants were rewarded with a sticker for completing the task. Please see Appendix F for the participant debrief.&#13;
Each participant’s response times were recorded in an Excel document which was then encrypted and saved to the researcher’s password-protected laptop. All data were also stored on an encrypted external hard drive; this copy of the data will be securely destroyed upon completion of the data analyses.&#13;
Figure 3. A photograph to show the testing set-up used in the current study. Note, participants were encouraged to adjust their seat height to remain at a ninety-degree angle to the screen. The testing set-up was replicated for all eighty-eight participants to ensure continuity.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1820">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1821">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1822">
                <text>Whitehead2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1823">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1824">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1825">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1826">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1827">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1828">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1829">
                <text>Dr Michelle To</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1830">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1831">
                <text>Cognitive, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1832">
                <text>Eighty-eight participants </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1833">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="77" public="1" featured="0">
    <fileContainer>
      <file fileId="32">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e46e8d20a4047d694e440d515b4cd3c7.pdf</src>
        <authentication>c117c44603181de41daef23e2c8092e5</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1794">
                <text>Infant Gesture and Parent Knowledge of Development</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1795">
                <text>Miranda Sidman </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1796">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1797">
                <text>Background: Before children can communicate verbally, they use gesture to tell us what they want. Our understanding of the importance of gesture in language development has expanded greatly over the past few decades. Furthermore, the methods used to measure gesture and language development have also progressed. Gesture and language assessment rely heavily on parent reports. It has been suggested that what parents know about development also has consequences for their child’s developmental outcomes.&#13;
&#13;
Aims: To validate the gesture section of the UK-CDI Words and Gestures (Alcock, Meints, &amp; Rowland, 2017), and to explore parent knowledge of language and gesture milestones.&#13;
&#13;
Methods &amp; Procedure: Twenty-seven children and their parents participated in the first experiment. The parents completed the UK-CDI W&amp;G and the children participated in an in-person gesture validation task. Thirty parents with a child aged 8-18 months participated in the second experiment. They completed the UK-CDI W&amp;G as well as our new parent knowledge questionnaire.&#13;
&#13;
Results: In Experiment 1, children’s scores from the gesture task correlated significantly with parent-reported scores on the UK-CDI W&amp;G. In Experiment 2, parents were more accurate at ordering and estimating the age of language milestones than gesture milestones.&#13;
&#13;
Conclusions: The findings from Experiment 1 provide further support and confidence for the UK-CDI W&amp;G as a language assessment tool. This will provide researchers and clinicians with a standardised tool and method for assessing language norms and delays. The findings from Experiment 2 indicate that parents are not especially knowledgeable within the developmental domain of gesture. This provides us with information on where parents need to be educated to benefit the developmental outcomes of their children. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1798">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1799">
                <text>Experiment 1  &#13;
&#13;
Method  &#13;
&#13;
In this experiment we attempted to validate the gesture section of the UK-CDI Words and Gestures questionnaire through responses to the questionnaire and with an in-person gesture task procedure.  &#13;
&#13;
Participants  &#13;
&#13;
Twenty-seven children and their parents participated in this study. Participants included 10 girls and 17 boys between eight and eighteen months (M = 12.5 months, SD = 2.3 months) who were recruited from the Lancaster University Babylab and through social media (e.g. Facebook). The parents who participated in this study were 26 mothers and one father. To be eligible for this study all participants had to be native British English speakers. All participants were self-selected and received a children’s book as payment for their participation.&#13;
&#13;
Apparatus and Materials  &#13;
&#13;
UK-CDI Words and Gestures  &#13;
&#13;
The UK-CDI Words and Gestures (Alcock et al. 2013) is a parent-report questionnaire used to assess the language development of children aged eight to 18 months. This questionnaire offers a checklist of words from several different categories (e.g., animals, toys, household items), with a total of 395 words. Parents are asked to indicate whether their child can say and understand, just understand, or does not know a word. The child obtains a score for total comprehension (the sum of the words they understand) and a total score for production (the sum of the words they say and understand). There is also a gesture section consisting of 57 gestures. The gesture section is divided into subsections (e.g., first communicative gestures, games, actions, pretending to be a parent, and imitating other adult actions). In the First Communicative Gestures section, parents are asked to indicate whether their child does a gesture often (two points), sometimes (one point), or not yet (zero points). For the remaining sections parents are asked to tick yes or no if their child does a gesture. A total gesture score is calculated as 0.5 × the First Communicative Gestures section score plus the total number of Yes responses from the remaining sections. See Appendix D for the full UK-CDI W&amp;G questionnaire.&#13;
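The total-gesture-score rule above can be written out as a one-line computation. This is an illustration only: the subsection values are invented and the variable names are not the questionnaire's own.&#13;

```python
# Total gesture score as described above: half the First Communicative
# Gestures points, plus the number of Yes responses summed over the
# remaining subsections. Illustrative helper, not the UK-CDI's own code.
def total_gesture_score(first_communicative_points, yes_counts):
    return 0.5 * first_communicative_points + sum(yes_counts)

# e.g. 10 points on First Communicative Gestures, and 5 + 3 + 2 + 4
# Yes responses across four remaining subsections.
score = total_gesture_score(10, [5, 3, 2, 4])
```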
&#13;
Gesture Task  &#13;
&#13;
The gesture task used in this study was constructed by Alcock et al. (2013) to establish content validity of the gesture scale on the UK-CDI W&amp;G. The gesture task consists of 10 gesture items taken from the gesture section of the UK-CDI Words and Gestures. The items range from low frequency items (e.g., ‘can you give me a high five?’), through medium frequency items (e.g., ‘can you put on a hat?’), to high frequency items (e.g., ‘can you feed the teddy/dolly?’). The stimuli were nine children’s toys required for the items on the gesture task. See Appendix B.&#13;
&#13;
Procedure  &#13;
&#13;
Participants were asked to complete the UK-CDI Words and Gestures (Alcock et al. 2016) prior to the home visit. Participants were sent the UK-CDI Words and Gestures via an electronic link. Upon completion of the UK-CDI, a home visit was scheduled and took place in each participant’s home. The task was administered by the researcher in a quiet room with the child and parent. Before the gesture task was administered, parents were briefed on the procedure and told not to repeat instructions during the gesture task until cued by the researcher. Participants were asked each item first without any demonstration or cueing. If there was no response, the researcher would demonstrate the gesture and say, ‘Can you show me the (x)?’. If there was still no response, the parent was asked to demonstrate the gesture (see Appendix B and C for the gesture task procedure and list of stimuli). Each participant was recorded for approximately 45 minutes.&#13;
&#13;
Scoring  &#13;
&#13;
For the gesture task, participants were scored for 30 minutes. Any time the participant was out of the camera’s view or was not cooperating was not included in the video analysis. For each item on the gesture task participants scored two points for completing a gesture on their own, one point for completing a gesture after a demonstration, or zero points for not completing the gesture. Participants were also observed and scored for any spontaneous gestures exhibited during the scored time. Spontaneous gestures included any gestures exhibited by the participant that are on the UK-CDI W&amp;G questionnaire but were not on the gesture task. Spontaneous gestures observed during the home visit were given a score of one if exhibited and zero if not.&#13;
&#13;
Inter-rater Reliability  &#13;
&#13;
Each video was scored twice by the researcher and scored a third time by another master’s student at Lancaster University. The second scorer was briefed on the nature of the videos, the UK-CDI W&amp;G questionnaire, and the gesture task, and was familiar with the content of the study. The agreement level was calculated as: Percent agreement = (agreements / (agreements + disagreements)) × 100. The two scorers reached an agreement level of 94%.&#13;
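The percent-agreement formula above transcribes directly into code; the counts below are illustrative, not the study's actual tallies.&#13;

```python
# Percent agreement as quoted above:
# (agreements / (agreements + disagreements)) x 100.
def percent_agreement(agreements, disagreements):
    return agreements / (agreements + disagreements) * 100

# e.g. 94 agreements and 6 disagreements yield 94% agreement.
level = percent_agreement(94, 6)
```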
&#13;
Experiment 2 Methods &#13;
&#13;
This experiment investigated what parents know about language and gesture development using two online questionnaires.&#13;
&#13;
Participants &#13;
&#13;
Thirty parents with a child between the ages of eight and 18 months participated in this study. All participants were mothers. Participants were recruited through the Lancaster University Babylab and through social media advertisements for the study. To be eligible for this study participants had to be native British English speakers. All participants who completed the study were entered into a draw to win a £20 Amazon gift voucher.&#13;
&#13;
Apparatus and Materials &#13;
&#13;
UK-CDI Words and Gestures &#13;
&#13;
The same version of the UK-CDI Words and Gestures (Alcock et al. 2016) was used in the second experiment. &#13;
&#13;
Parent Knowledge Questionnaire &#13;
&#13;
The researcher constructed a questionnaire to investigate what parents know about language and gesture development. The format of the questionnaire was based on a previous study investigating what mothers know about play and language development (Tamis-LeMonda et al. 1998). The questionnaire consisted of 11 language items and 11 gesture items. The researcher used a paired-comparisons procedure to match each item in the respective domain (language or gesture) with the remaining items, resulting in 55 pairs for language and 55 pairs for gesture. All pairs were randomized and presented in a left-right alignment. Participants were asked to select the item in each pair they believed to be more difficult and to occur at a later age. Following the paired-comparisons task, the same 11 language and 11 gesture items were presented in randomized order on an age checklist, and participants were asked to estimate the age at which each milestone emerges. See Appendix E for the full questionnaire. &#13;
&#13;
Language and Gesture Scales &#13;
&#13;
The language and gesture items were chosen based on empirical findings about language and gesture development in the literature and on the previous work of Tamis-LeMonda et al. (1998). The language items gradually increased in sophistication from level one to level 11. Levels one through four represented prelinguistic communication, from nondiscriminant cooing to requesting a target object. Levels five through seven represented single-word utterances, from imitation to expressing possession. Levels eight to 11 represented multi-word utterances, from expressing concrete desires to expressing memories and emotions. &#13;
&#13;
The gesture items were taken from the UK-CDI W&amp;G (Alcock et al. 2016) gesture section. Items were selected to ensure the full age range of eight to 18 months was represented. &#13;
&#13;
Procedure &#13;
&#13;
Participants were sent two links to complete the UK-CDI W&amp;G questionnaire and the Parent Knowledge Questionnaire. For the UK-CDI W&amp;G questionnaire, participants were instructed to indicate whether their child could understand and say a word, just understand it, or neither. Participants were also instructed to indicate whether their child could complete each gesture. Upon completion of the UK-CDI W&amp;G, participants were then instructed to complete the Parent Knowledge Questionnaire. &#13;
&#13;
The first task on the Parent Knowledge Questionnaire included 11 language and 11 gesture items, which rendered 55 paired comparisons in each domain. Participants were asked to select the item in each pair they believed to be more difficult, that is, to occur later in development. Following the paired-comparisons task, participants were given the 11 language and 11 gesture items individually (and randomized) and were asked to estimate the age at which they believed each milestone first occurs. From these procedures, the researcher calculated the parents’ accuracy at judging the difficulty of language and gesture items by correlating their ordering of items with the empirical scales using Spearman rho. Four accuracy scores were calculated for each participant: two from the paired-comparisons tasks for language and gesture separately, and two age-estimation accuracy scores from the language and gesture age checklists. The researcher also calculated two discrepancy scores for each participant, one for language and one for gesture. Each score estimated how discrepant parents’ judgements of age onsets were; these values were computed by summing the absolute differences between parents’ age estimates and the empirical ages of onset as stated in the literature. &#13;
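The two score types described here can be sketched in Python (a minimal illustration with invented example data; the Spearman rho formula shown is the standard tie-free rank formula, not necessarily the exact software the author used):

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rho for two rankings without ties: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def discrepancy_score(parent_ages, empirical_ages):
    """Sum of absolute differences between estimated and empirical ages of onset (months)."""
    return sum(abs(p - e) for p, e in zip(parent_ages, empirical_ages))

# Hypothetical 4-item example: a parent who swaps two adjacent items
spearman_rho([1, 2, 4, 3], [1, 2, 3, 4])     # 0.8
# Hypothetical 3-item age estimates (months) vs. empirical onsets
discrepancy_score([9, 12, 15], [8, 12, 16])  # 2
```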
&#13;
Ethics &#13;
&#13;
After reading information about the study, parents ticked a box to give their consent to participate in this study. Ethical approval for the study was obtained from the Lancaster University Research Ethics Committee. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1800">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1801">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1802">
                <text>Sidman2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1803">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1804">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1805">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1806">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1807">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1808">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1809">
                <text>Dr. Katie Alcock</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1810">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1811">
                <text>Clinical, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1812">
                <text>Twenty-seven children</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1813">
                <text>Correlation, psychometrics, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="76" public="1" featured="0">
    <fileContainer>
      <file fileId="30">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4e3a3385ed408600eae4500b535495c8.pdf</src>
        <authentication>77939218cb4037e3126cc7d4f2cc61c7</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1775">
                <text>Cortical Hyper Excitability correlating with Visual Distortions and Hallucinations</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1776">
                <text>Nishtha Bakshi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1777">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1778">
<text>The primary focus of our study concerned how abnormalities in visual experience, such as visual distortions or hallucinations, relate to cortical hyperexcitability in the non-clinical population. Aberrant neural processes lead to anomalous experiences, and susceptibility to such visual distortions reflects elevated levels of cortical hyperexcitability. Regarding methodology, forty-eight non-clinical individuals completed the "Pattern Glare Task", in which they viewed striped grating patterns with different spatial frequencies. They also completed the Cortical Hyperexcitability Index (CHi) and the Cambridge Depersonalization Scale (CDS). The pattern glare task showed that individuals experienced more visual distortions at the medium frequency (3 cpd), and the CDS and CHi results were consistent with this finding. In conclusion, the study suggests that members of the non-clinical population do experience some degree of elevated cortical hyperexcitability, and it establishes the utility of pattern glare, alongside the CHi and CDS, in adding to our existing knowledge. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1779">
                <text>Introduction&#13;
The major objective of this study is to understand the relationship between cortical hyperexcitability and the various visual hallucinations or distortions in the non-clinical population. The central research question is how aberrant neural processes lead to anomalous experiences. This section describes the methodology used to investigate and validate the hypotheses posed by the research question of this project. The participants for this study were 48 non-clinical individuals, who were tested with the Pattern Glare Task, the Cortical Hyper-Excitability Index, and the Cambridge Depersonalization Scale.&#13;
Participants&#13;
Forty-eight individuals, undergraduates and postgraduates aged between 21 and 33, were recruited for the experiment via random sampling. The mean age of the participants was 24. Of these, 30 (62%) were male and 18 (38%) were female. None of the individuals reported any medical history of seizures or photosensitive epilepsy, or a diagnosis of migraine; individuals suffering from migraine, migraine with aura, or photosensitive epilepsy were excluded from the study. &#13;
Materials&#13;
Pattern Glare Test&#13;
The pattern glare task includes stripy patterns on three separate cards, each with a different spatial frequency: a low spatial frequency baseline grating (approx. 0.5 cycles per degree, cpd), a high spatial frequency baseline grating (approx. 12 cpd), and the crucial medium spatial frequency grating (approx. 3 cpd). A paper-based version (Wilkins, 1995; Wilkins et al., 1984) was used in place of the computerised pattern glare task. The stimuli used in the experiment are given in Figure 1. Individuals are asked to stare at the white dot in the center of each pattern for approximately 10-15 seconds while holding each pattern at arm's length. Participants are then asked a series of questions, i.e. whether they experienced any blurring of lines, bending of lines, fading, shimmering, flickering, or shadowy shapes. On the basis of their experience viewing each pattern, participants rate each question on a scale of 0-7, where 0 is the minimum and 7 the maximum (Wilkins et al., 1984; Conlon et al., 1999). A score is obtained for each pattern, and the difference between the medium-frequency (3 cpd) and high-frequency (12 cpd) scores is recorded, known as the '3-12 difference'. &#13;
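The '3-12 difference' is a simple subtraction of distortion scores, which can be sketched as follows (illustrative only; the function name and example scores are ours):

```python
def pattern_glare_difference(score_3cpd: int, score_12cpd: int) -> int:
    """'3-12 difference': total distortion score at 3 cpd minus score at 12 cpd."""
    return score_3cpd - score_12cpd

# e.g. a hypothetical participant scoring 5 at 3 cpd and 1 at 12 cpd
pattern_glare_difference(5, 1)  # 4
```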
&#13;
&#13;
 Cambridge Depersonalization Scale&#13;
The CDS is a self-report questionnaire used to measure the duration and frequency of any depersonalization symptoms an individual has experienced over the past six months (Sierra and Berrios, 1999). The CDS is an instrument containing 29 items. Each item is rated on Likert scales for both frequency (0-4; where 0=never, 1=rarely, 2=often, 3=very often, and 4=all the time) and duration, based on how long the experiences last on average (1-6; where 1=few seconds, 2=few minutes, 3=few hours, 4=about a day, 5=more than a day, and 6=more than a week). The global score is the sum of all items (0-290). Sierra et al. (2005) established four well-determined factors underlying the different symptoms of depersonalization: ‘Anomalous Body Experience’, ‘Emotional Numbing’, ‘Anomalous Subjective Recall’, and ‘Alienation from Surroundings’. This questionnaire addresses the complexity of depersonalization and uncovers symptoms that can be mapped onto distinct psychopathological domains. &#13;
Cortical Hyper excitability Index&#13;
The CHi was designed to provide an index of the visual irritability, discomfort, and associated visual distortions that individuals experience (Braithwaite, Merchant, Dewe and Takahashi, 2015). These experiences are well linked to increased cortical hyperexcitability. A major advantage of the CHi’s design is that it captures three broad factors: (1) heightened visual sensitivity and discomfort, (2) negative aura-type visual aberrations, and (3) positive aura-type visual aberrations. The items in the questionnaire cover a wide selection of visual experiences (e.g. sensitivity to external sensory information such as lights or patterns; discomfort in certain environments; dizziness or nausea; discomfort or irritation from reading a certain font or style of writing) that have previously been reported in hallucination-based experimental studies of patients, control groups, and non-clinical populations, and in work on aura and its underlying dimensions. The CHi uses fine-grained 7-point Likert response scales: each question has two response scales, frequency (1-7; where 1=not at all frequent and 7=very frequent) and intensity (1-7; where 1=not at all intense and 7=extremely intense). In terms of scoring, a value of 1 is first subtracted from each frequency and intensity response, transforming the 1-7 scales to 0-6 scales; without this transformation, an individual who responded with 1 to every question would still have a score of 54. The transformed scales are then summed to provide an overall CHi index for each question. &#13;
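The scoring transform just described can be sketched in Python (illustrative only; the function name is ours, and the 27-question count is an assumption inferred from the raw floor of 54 mentioned above):

```python
def chi_item_index(frequency: int, intensity: int) -> int:
    """Per-question CHi index: shift each 1-7 response down to 0-6, then sum."""
    return (frequency - 1) + (intensity - 1)

# A participant answering 1 ("not at all") on both scales for every question
# scores 0 overall after the shift, rather than the raw floor of 54
# (assuming 27 questions x 2 scales on the untransformed 1-7 scales).
total = sum(chi_item_index(1, 1) for _ in range(27))  # total == 0
```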
Design and Procedure&#13;
All participants were given a brief explanation of the purpose of the study and how they could contribute to it; if they agreed, a time was scheduled for the voluntary study. The experiment was conducted in the Social Hub of the Graduate College, Lancaster University. Participants were seated comfortably to the right of the researcher. Individuals were asked to read the Participant Information Sheet carefully and, if they agreed, to sign their respective consent forms. It was made clear to the participants that the confidentiality of their personal information would be ensured and that they could at any point (1) ask questions during the experiment, (2) stop the experiment if they felt uncomfortable, and (3) withdraw from the study with no further adverse consequences, provided they informed the researcher by email. Participants were again asked whether they suffered from any neurological disorder, especially migraine, migraine with aura, or photosensitive epilepsy, and whether they had any severe incidence of alcohol or drug abuse. &#13;
The first phase of the experiment was the pattern glare task. Individuals were handed the first pattern, with low frequency (LF), and were asked to stare at the white dot in the center of the pattern for 10-15 seconds. They were then asked to rate the questions based on their experience on a scale of 0-7 (0 = minimum, 7 = maximum). The questions asked whether they experienced any blurring of lines, bending of lines, shimmering or flickering, fading, or shadowy shapes. Before the second pattern was handed over, the researcher made sure that the participant was comfortable proceeding with the experiment and was not experiencing any kind of visual stress. The same steps were repeated for the other two patterns, with medium frequency (MF) and high frequency (HF). &#13;
The order in which the participants viewed the patterns was randomized for each person. Proneness to pattern glare can be quantified either as the sum of distortions at 3 cpd (MF) or as the difference between the 3 and 12 cpd scores, also called the '3-12 cpd difference'. After a two-minute break, in the second phase of the experiment participants answered the 29 questions of the Cambridge Depersonalisation Scale, based on the frequency and duration of any 'strange or funny experiences' they had felt in the past six months. In the third phase, the second questionnaire, the Cortical Hyper Excitability Index, was introduced to the participants. As with the patterns, the order in which the questionnaires were presented to the participants was also randomised. The total time taken to conduct the experiment was about 20 minutes or less. Afterwards, the individuals were thanked for their time and effort.  &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1780">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1781">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1782">
                <text>Bakshi2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1783">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1784">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1785">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1786">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1787">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1788">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1789">
                <text>Dr Jason J Braithwaite</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1790">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1791">
                <text>Clinical Psychology&#13;
Neuropsychological</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1792">
                <text>48 Participants (30 males and 18 females)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1793">
                <text>Correlation&#13;
Multiple Regression&#13;
ANOVA&#13;
Exploratory Factor Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="75" public="1" featured="0">
    <fileContainer>
      <file fileId="29">
        <src>https://www.johnntowse.com/LUSTRE/files/original/433bc8b147842b22913688daad5b82c3.pdf</src>
        <authentication>cd8e35e608f8c4e794a24714ed2ede85</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1756">
<text>Accessing Cortical Hyperexcitability and Its Predisposition Using Two Types of Measurements</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1757">
                <text>Flora Zuo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1758">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1759">
<text>This study aimed to explore cortical hyperexcitability in depth. To do so, it used the pattern glare task and three questionnaires: the Cortical Hyperexcitability Index II, the Cardiff Anomalous Perceptions Scale, and the Multi-Modality Unusual Sensory Experiences Questionnaire. The pattern glare task induces on-the-spot hallucinations and distortions, while the questionnaires measure the long-term, everyday unusual sensory experiences one may have had. In this study, both the questionnaires and the task measured the same underlying factor, cortical hyperexcitability, in the sense that the predisposition to seizure-like hallucinations and distortions and the predisposition to everyday hallucinations and anomalous experiences were hypothesized to be associated in a particular way. The pattern glare task comprised two blocks, one with a blindfold and one without, presented to participants in different orders to counterbalance the order effect. Between the two blocks, participants answered the three questionnaires. The results showed no significant effect of the blindfold, suggesting that wearing the blindfold for five minutes increased the sensitivity of neither the eyes nor the visual cortex. Most of the relationships between the pattern glare task and the questionnaires failed to reach significance. The investigation of the association between the predispositions to the two types of hallucinations also failed to show significance; only the MUSEQ and pattern glare showed a significant correlation. The migraine and migraine-with-aura groups appeared to be more sensitive to phosphene phenomena; their sensitivity, though the results were not significant, could be clearly observed in the descriptive statistics. 
Although the results and findings failed to support the research hypothesis, probably owing to the main limitation of poorly presented stimuli, the current study was to some extent able to expand the current understanding of cortical hyperexcitability demonstrated by previous work, and it offers further possibilities for future studies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1760">
                <text>Along with the pattern glare task, three questionnaires were used in the study: the MUSEQ (Mitchell et al., 2017), the CAPS (Bell et al., 2006), and the CHI II (Fong et al., in press). The study was ethically approved by the Department of Psychology at Lancaster University on 11th May 2018. &#13;
Participants&#13;
Participants were screened before taking part in the experiment: anyone who had been diagnosed with photosensitive epilepsy or epilepsy, or who had recently undergone brain or eye surgery, was excluded. This criterion was applied because viewing striped patterns of particular spatial frequencies may induce seizures in patients with photosensitive epilepsy (Wilkins et al., 1984).&#13;
No participants were excluded on these grounds. A total of 43 participants took part in the study, 15 male and 28 female. Ages ranged from 19 to 36 years, with a standard deviation of 2.92, and around half of the participants were native English speakers. Six participants self-reported having migraine or migraine with aura; this was noted before the study, as the pattern glare task may induce or intensify their symptoms, causing visual discomfort, visual distortions, or headache. Of these six migraineurs, three experienced migraine with aura.&#13;
Stimuli and Procedure&#13;
The stimuli were printed onto cards and presented to participants at eye level from approximately 50 cm away. The patterns were all the same size, 20 mm × 15 mm, all in black and white, and elliptical in shape. Under these conditions, the visual angle was calculated to be 12.84 degrees. The three questionnaires were printed on paper, and participants were asked to read their answers aloud rather than write them down. The plain black blindfold worn during the study was bought from a drugstore.&#13;
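For reference, visual angle is conventionally computed as theta = 2 × arctan(size / (2 × distance)). The sketch below applies this standard formula to the dimensions given above; the authors' exact calculation method is not stated, so the formula and the choice of the 20 mm width are assumptions for illustration only.

```python
import math

def visual_angle_deg(size_mm: float, distance_mm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of size_mm at distance_mm.

    Standard formula: theta = 2 * arctan(size / (2 * distance)).
    """
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm)))

# Using the 20 mm stimulus width and ~50 cm (500 mm) viewing distance from the text.
# Note this yields roughly 2.3 degrees; the 12.84 degrees reported above presumably
# reflects a dimension or method not stated here.
width_angle = visual_angle_deg(20, 500)
```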
Material&#13;
Three different patterns were used in this study, with spatial frequency gratings of 11 cpd (cycles per degree), 3 cpd, and 0.7 cpd respectively. All the patterns were achromatic, with a fixation dot in the centre. After each stimulus was presented, participants answered 17 questions about the intensity of anomalous visual phenomena, the types of visual hallucinations, and whether they experienced headache or dizziness. The materials were adapted from the previous work of Braithwaite et al. (2014). The three questionnaires administered between the two blocks of stimulus presentations were the MUSEQ, the CAPS, and the CHI II. The MUSEQ (Mitchell et al., 2017) has 43 items loading onto six factors (Auditory, Visual, Olfactory, Gustatory, Bodily sensations, and Sensed presence), rated on a five-point Likert scale targeting the frequency of unusual sensory experiences. The CAPS (Bell et al., 2006) has 32 items, also addressing anomalous experiences across different modalities; for each item they endorsed, participants rated their experience on three five-point scales of distress, intrusiveness, and frequency. The CHI II has 30 items, each rated for frequency and intensity on a seven-point Likert scale, with zero meaning never or not intense and six meaning all the time or extremely intense. This questionnaire is the recently updated version of the original CHI, and its 30 items load onto three non-overlapping factors: heightened visual sensitivity and discomfort (HVSD), aura-like hallucinatory experience (AHE), and distorted visual perception (DVP). &#13;
For the MUSEQ and the CAPS, the original unrevised questionnaires were administered; however, only part of the answers was used in the analysis. This decision was made because taking all the factors into consideration would have made the analysis too complicated, especially as some factors are only partially related to the research question. Therefore, for the MUSEQ only the Visual, Auditory, and Bodily modalities were analysed, and for the CAPS the primary concern was exclusively the temporal lobe experience factor.&#13;
In the non-blindfold block all three stimuli were presented, but in the blindfold block only the medium and high spatial frequency stimuli were included. The low frequency stimulus was excluded from the blindfold block because it is too mild to induce hallucinations in participants; its inclusion in the non-blindfold block served mainly as a check on suggestibility, since participants who give a high rating for the low frequency stimulus may produce unreliable scores on the other measures as well (Wilkins et al., 1984). Participants with excessively high low-frequency pattern glare scores would therefore be excluded from the analysis.&#13;
Procedure&#13;
Prior to the experiment, participants were seated at a fixed distance of approximately 50 cm from the stimuli. They were then given the information sheet and consent form, which contained the information they needed in order to proceed with the study. The consent form included a list of questions about specific medical conditions: epilepsy, photosensitive epilepsy, neurological and eye surgery, and migraine and migraine with aura. The researchers confirmed participants' status on these conditions before the experiment could take place. &#13;
The first phase of the experiment was the pattern glare test, which comprised two blocks, one with the blindfold and one without. Each participant was assigned a number corresponding to their order of participation: odd-numbered participants completed the non-blindfold block first, and even-numbered participants completed the blindfold block first. The numbering and the manipulation of block order were concealed from the participants. The blindfold block contained two stimulus presentations, one at the medium spatial frequency (SF) and one at the high SF. The low SF stimulus was not included because it served as a control in the non-blindfold block, having little to no effect (Braithwaite et al., 2013, 2015). Participants put on the blindfold before the stimuli were presented, wearing it for five minutes prior to the blindfold block.&#13;
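The odd/even counterbalancing rule described above can be sketched as follows (the function name is illustrative, not taken from the study materials):

```python
def block_order(participant_number: int) -> list[str]:
    """Counterbalancing rule: odd-numbered participants (by order of
    participation) complete the non-blindfold block first; even-numbered
    participants complete the blindfold block first."""
    if participant_number % 2 == 1:
        return ["non-blindfold", "blindfold"]
    return ["blindfold", "non-blindfold"]
```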
After viewing each pattern, participants answered the 17 questions about the associated visual distortions. They read their answers aloud, and the answers were immediately recorded on a computer. There was no break between trials; participants moved on to the next pattern once they had finished all the questions. &#13;
Between the two stimulus presentation blocks, participants completed the three questionnaires: the MUSEQ (Mitchell et al., 2017), the CAPS (Bell et al., 2006), and the CHI II (Braithwaite et al., in press). Completing the three questionnaires took approximately 20 minutes. Once they were completed, the next block of stimuli was presented, with or without a blindfold as appropriate. After both blocks and all three questionnaires were completed, participants were given the debrief sheet at the end of the experiment. &#13;
The entire process took about 30 minutes for native English speakers; for participants who spoke English as a second language, it took slightly longer, around 35 to 40 minutes.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1761">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1762">
                <text>data/SPSS.sav&#13;
data/.JASP</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1763">
                <text>Zuo2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1764">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1765">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1766">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1767">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1768">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1769">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1770">
                <text>Jason Braithwaite</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1771">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1772">
                <text>Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1773">
                <text>43 Participants (15 males and 28 females)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1774">
                <text>ANOVA&#13;
Bayesian Analysis&#13;
Correlation&#13;
t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="74" public="1" featured="0">
    <fileContainer>
      <file fileId="28">
        <src>https://www.johnntowse.com/LUSTRE/files/original/817c41573a9c56ee11930d194feca1ef.pdf</src>
        <authentication>fec8027de6e092210eb31aa35a2d4d85</authentication>
      </file>
    </fileContainer>
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1736">
                <text>The Shock Impact: An investigation of attitudes towards the use of shock tactics in charity advertisements.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1737">
                <text>Victoria Meadows</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1738">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1739">
                <text>While the use of shock has been praised for increasing attention, it has also been shown to cause distress and to negatively affect perceptions of the organization or brand. Shock advertising is increasingly popular in the non-profit sector, with organizations using shocking visual imagery to encourage viewers to take action against a cause or to increase donations. This study aimed to deepen our understanding of attitudes towards the effectiveness of this tactic and to uncover the attributes that contribute to it. Based on previous research into the effects of gender on advertisement preferences, we also compared the opinions of male and female participants to uncover preferences for shocking or non-shocking advertisements. Three focus groups were conducted to collect attitudes towards charity advertisements. Participants were presented with six advertisements, split into three categories of health-, animal-, and child-based charities, each with one shocking and one non-shocking campaign. To compare genders, one focus group contained only males, one only females, and one was mixed. The perceived effectiveness of shock was higher for health-related causes, lower for children’s charities, and mixed for animal causes. Males and females differed in their attitudes towards the use of shock in animal-based charities, with females engaging more with the non-shocking advertisement and males with the shocking one. The results of this research improve our knowledge of when and why shock should be used in charity advertisements, and how it can be used to target particular audiences.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1740">
                <text>Shock&#13;
Advertising&#13;
Gender&#13;
Charity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1741">
                <text>Participants&#13;
Sixteen participants took part in this study, all students attending Lancaster University, with an age range of 20 to 28 years. The sample consisted mostly of native English speakers (13), with two Romanian and one Panamanian native speaker (English as a second language). Participants were recruited through opportunity sampling and took part in the study voluntarily. &#13;
This study received departmental approval before data collection commenced.&#13;
Design&#13;
	The study consisted of three focus groups: one containing only females (FGF) and one containing only males (FGM), to examine any differences in attitudes between genders, and one of mixed gender (FG1), to assess possible conflicting attitudes within a group. Five students participated in the mixed focus group (three males, two females), five in the female focus group, and six in the male focus group.&#13;
	Focus groups were conducted in a private room and lasted 40-50 minutes.&#13;
Materials &#13;
	The stimuli presented to participants were existing advertising campaigns released by non-profit organizations in the United Kingdom and the United States of America. Three ‘non-shocking’ advertisements and three ‘shocking’ advertisements were chosen, with one in each category centered around health, animal cruelty, and child abuse (Appendix A).&#13;
	‘Shocking’ advertising has been defined by Dahl and colleagues (2003) as something that violates the social norm, including content that is seen as disgusting, obscene, vulgar, morally offensive, or containing sexual references. Using this definition as a guide, the ‘non-shocking’ advertisements were chosen dependent on the lack of these traits and did not include, for example, references to blood or death, obscene gestures, or violence. Adverts released by the National Society for the Prevention of Cruelty to Children (NSPCC), the National Health Service (NHS), and Battersea Dogs and Cats Home were chosen.&#13;
Again using this definition, we selected the ‘shocking’ advertisements for their inclusion of the shocking traits outlined by Dahl and colleagues (2003). Barnardo’s children’s charity was chosen for its obscene image of a distressed newborn baby with a methylated spirit bottle in its mouth. The Public Health Service’s Smoke Free advertisement, featuring a cigarette that morphs into bloodied guts and tissue, was chosen for its disgusting imagery. Lastly, the People for the Ethical Treatment of Animals (PETA) ad featuring a dead, skinned animal was chosen for its offensive images of harmed animals. &#13;
These were printed out and presented to the participants on paper so they could have a closer look at the advertisements.&#13;
A discussion guide was created to direct the conversation in the focus groups (Appendix B). The guide was designed to ensure continuity between the groups, as advised by Malhotra (2008), helping to tailor the discussion to the topics of the research aims while also giving participants the opportunity to express their thoughts freely. Following Goulding’s (1998) guidelines, the guide was flexible, enabling the facilitator to ask further questions in relation to what came up in conversation.&#13;
Procedure&#13;
	Participants were seated around a table and had access to refreshments throughout the focus group. They were given time at the beginning to get comfortable and talk with fellow participants. Each participant was given an information sheet (Appendix C) that detailed the aims of the research and what they were expected to do. They were informed that they could ask any questions they wished and that they had the right to withdraw at any point during or after the focus group. Once they had read the information sheet and understood what they were taking part in, participants signed the consent form (Appendix D) to agree to take part in the study. &#13;
	At this point participants were informed that recording would commence. The discussion guide was followed throughout, first introducing the topic area covered by the focus group and encouraging participants to consider advertising in general. They were then asked specifically about charity advertisements and any overall feelings towards ones they had seen. Participants then discussed the advertisements presented to them. Starting with the non-shocking advertisements, participants had time to view and discuss each advert one at a time, and were asked about its effectiveness and anything they liked or disliked about it. The definition of ‘shocking’ advertisements was then introduced and the procedure was repeated, presenting one advertisement at a time. Participants were then asked to compare their thoughts on which advertising tactic they considered more effective, and whether this differed by the type of cause being advertised and the action being asked of the audience, for example a donation or a change in behavior. This was done in the same order throughout to ensure consistency across the groups. Lastly, any final thoughts from the group were collected, and participants were informed that they could email the investigator with any further thoughts if they wished. They were thanked for their participation and given a debrief sheet (Appendix E) containing more information on this research into the topic area, as well as the contact details of the researcher and supervisor. &#13;
	The recording was then transcribed, and analysed thematically through the use of NVivo qualitative data analysis software, to highlight common themes throughout all three focus groups. This enabled us to compare attitudes held towards the varying types of advertising campaigns, their causes, and any differences between genders.&#13;
Analysis &#13;
	The transcript for each focus group was entered into NVivo (QSR International Pty Ltd. Version 12, 2017) in preparation for thematic analysis. This analysis was designed to uncover themes throughout the focus groups in a systematic way, identifying patterns in the opinions of the participants. To analyse the data rigorously, the thematic analysis guidelines proposed by Braun and Clarke (2006) were followed. The transcripts were first read thoroughly to ensure familiarity with the conversations. They were then coded in NVivo according to their content through an inductive approach, forming codes from the data at hand rather than attempting to fit a pre-existing framework from past theories, thereby allowing us to broaden our inclusion of the attitudes recorded. The data collected in these codes were sorted into potential themes, ensuring consistency within and variation between the themes. These themes were then re-analysed to make sure they reflected the data collected, and the final themes were decided upon.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1742">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1743">
                <text>Text/nvivo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1744">
                <text>Meadows2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1745">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1746">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1747">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1748">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1749">
                <text>Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1750">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1751">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1752">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1753">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1754">
                <text>16 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1755">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="73" public="1" featured="0">
    <fileContainer>
      <file fileId="27">
        <src>https://www.johnntowse.com/LUSTRE/files/original/bf76a14844c88c1dd0ef4939b32360b5.doc</src>
        <authentication>17cb6888200979eb0f30dccb705d3150</authentication>
      </file>
    </fileContainer>
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1716">
                <text>The Effect of Systematic Variance in Action Capabilities on Grasp Ability Perception.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1717">
                <text>Megan Rose Readman </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1718">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1719">
                <text>The ecological approach to visual perception asserts that individuals perceive their environment relative to the possibilities for action within it. Hence, to interact successfully with one’s environment, one must be able to accurately perceive the extent over which actions can be performed, widely referred to as action boundaries. Furthermore, as the world we inhabit is continually changing and thereby placing varying constraints upon one’s action boundaries, individuals must be able to update their perceived action boundaries to accommodate such variance. While research has shown that individuals can update their perceptions to accommodate variance, it remains unclear to which action boundary the perceptual system calibrates in these circumstances. This study investigated the question by analysing the effect of systematic variance on perceived grasp ability in virtual reality. Participants provided estimates of grasp ability following motor experience grasping with either a small, normal, large, or varied-size hand. In the variance condition, participants experienced the small hand 25% of the time, the normal hand 25% of the time, and the large hand 50% of the time. The results indicated that participants’ perception of grasp ability reflected the artificial manipulation, such that perceived grasp ability was largest in the large-hand condition. In the variable condition, participants took all visual information into consideration but erred on the side of caution. However, factors such as age and personality may have influenced the results.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1720">
                <text>Embodied perception&#13;
Grasp ability&#13;
Affordance perception&#13;
Virtual Reality</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1721">
                <text>Open Science Framework (OSF)&#13;
This study has been pre-registered with the OSF; see https://osf.io/zkjdt/ for the main OSF project page. The study deviated from the pre-registration in that data collection ran 12 days longer than initially intended, as participant uptake was lower than anticipated.&#13;
Participants&#13;
30 Lancaster University students (5 male, 25 female) aged 18-26 (Mage = 21.07, SDage = 1.17), naïve to the purpose of this study, participated. All participants were recruited via opportunity sampling, using the Lancaster University Sona research participation system, advertisements, and the researcher’s social network, and were paid £5 for their participation. Of these participants, 29 were right-handed and one was mixed-handed; the mixed-handed participant elected to complete the study with their right hand, so the data should be treated as coming from right-handed participants. In addition, all participants had normal or corrected-to-normal vision and no known medical history of visual atypicalities (beyond being long- or short-sighted) or of motoric or rheumatologic difficulties. All participants provided informed consent. Lancaster University Research Ethics Committee granted ethical approval for this study.&#13;
Stimuli and apparatus &#13;
A virtual environment was developed in the Unity 3D© Gaming Engine with the Leap Motion plugin. The 3D VR colour display comprised a 3D model of a room with a table located in the centre. Upon this table were either two grey dots (in the calibration trials; see Panel A of Figure 2) or a grey block (in the block size manipulation trials; see Panel B of Figure 2). Participants viewed the VR from a first-person perspective reflecting their natural eye height. The environment was presented to participants through an Oculus Rift CV1 HMD, which displayed the stereoscopic environment at 2160×1200 at 90Hz split over both displays (Binstock, 2015).&#13;
The movement of the head was tracked by the head mounted display (HMD) and updated in real-time as the participant looked around the environment. Furthermore, the location of the hand was tracked in real-time, using the Leap Motion hand-tracking sensor mounted on to the Oculus Rift CV1 HMD, and was mapped onto the virtual hand thereby causing the virtual hand to move in correspondence with the natural hand.&#13;
&#13;
Procedure &#13;
Each participant attended one testing session lasting approximately 30 minutes. Prior to the commencement of the study, full information regarding the requirements of the study was provided by means of a written information sheet, supplemented with a verbal explanation and an opportunity to ask questions. Once full understanding of the study requirements was established, participants provided informed consent and were reminded of their right to withdraw. Participants then completed a simple demographic questionnaire detailing their age, sex, hand dominance, and the presence of ocular atypicalities and motoric or rheumatologic difficulties. Critically, at this time the grasp that participants would be required to visualise during the perceptual task was defined and demonstrated: placing the thumb on one edge of the block, extending the hand over the surface of the block, and placing one of the fingers on the parallel edge of the block.&#13;
Participants then donned the Oculus Rift HMD with the attached Leap Motion Sensor and completed four experimental conditions, the order of which was randomly counterbalanced across participants: the constricted grasp condition, the normal grasp condition, the extended grasp condition, and the systematically varied grasp condition. In the constricted grasp condition, participants gained motor experience with a virtual hand that was 50% of the size of their actual hand, constricting the grasp to 50% of normal grasp ability. In the normal grasp condition, participants gained motor experience with a virtual hand reflecting the true size of their actual hand, so grasp ability was 100% of normal. In the extended grasp condition, participants gained motor experience with a hand that was 150% of the size of their actual hand, extending grasp ability 50% beyond normal. In the systematically varied condition, participants experienced the constricted hand size 25% of the time, the normal hand size 25% of the time, and the extended hand size 50% of the time.&#13;
Each experimental condition consisted of two phases: the calibration phase and the block size manipulation phase. The calibration phase consisted of 30 trials in which participants viewed the virtual display comprising a table upon which two grey dots, one to the left and one to the right, were located (see Panel A of Figure 2). The calibration phase was included to provide participants with sufficient synchronous visuomotor information to induce the illusion that the virtual hand was their own (Kilteni et al., 2012). Engaging this illusion is critical because, if participants do not experience it, the subsequent results will not accurately reflect the study manipulations. In addition, the calibration phase provided participants with visual and motor experience of the action boundary associated with the virtual hand.&#13;
To complete the calibration phase, participants touched the leftmost dot with the leftmost digit of their dominant hand and the rightmost dot with the rightmost digit of the same hand. Participants were informed that it was acceptable if they could not reach a dot, so long as they performed the action. After the participants had performed the action of touching both dots, the two dots disappeared and reappeared in a different location on the table. The location of the dots and the distance between them varied randomly across all 30 trials; however, the distance of the dots from the participant was held constant, as dictated by the Z coordinate in the study script.&#13;
On completion of the calibration phase, participants were instructed to place both hands on their lap so that the hands were out of range of the Leap Motion Sensor and the virtual hand was therefore not visible in the virtual reality. The display was then altered so that the participant viewed the table with a white block located upon it (see Panel B of Figure 2). Once the new display was presented, the researcher placed the hand with which the participant had just completed the calibration phase on the right and left arrow keys of a standard QWERTY keyboard. Participants were then instructed to imagine grasping the block, employing the previously demonstrated grasp, and to manipulate the size of the block using the right and left keys to reflect the maximum size they believed they could grasp with their dominant hand. Each button press altered the size of the block by 1cm. Once the participant was satisfied that the block reflected the maximum size they could grasp, the researcher saved the final size and presented another block. This phase consisted of eight trials: in four the block started small at 3cm, and in the remaining four it started large at 20cm. This was done to control for the potential influence of previous perceptions on later judgements, a phenomenon commonly known as hysteresis (Poltoratski &amp; Tong, 2014).&#13;
On completion of both the calibration and block size manipulation phases for each of the four conditions, participants were given a short verbal debrief regarding the true aims and theoretical underpinning of the study and an opportunity to ask any questions. To supplement this, participants were also provided with a written debrief documenting the aims and theory of the study, together with contact details for the lead researcher.&#13;
The raw data obtained comprised eight maximum graspable block size estimates per experimental condition (small, normal, large, and variable hand size): four relating to the block that started at 3cm and four relating to the block that started at 20cm. Therefore, 32 estimates were obtained from each participant.&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1722">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1723">
                <text>data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1724">
                <text>Readman2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1725">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1726">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1727">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1728">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1729">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1730">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1731">
                <text>Dr Sally A. Linkenauger</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1732">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1733">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1734">
                <text>30 Lancaster University students (5 male, 25 female)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1735">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="72" public="1" featured="0">
    <fileContainer>
      <file fileId="26">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e3ec9a2d7e322ed9e9f3c933eeb1c0f7.pdf</src>
        <authentication>78a4d22dd75eb0dea0177eebcfb5a978</authentication>
      </file>
      <file fileId="64">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c6ac751946b14a68bfe4f2d19f12bd28.csv</src>
        <authentication>09531ca3ced3d74db868d28231d85358</authentication>
      </file>
      <file fileId="65">
        <src>https://www.johnntowse.com/LUSTRE/files/original/745d3a46a3075a757a76127c26b40b88.csv</src>
        <authentication>2a37d3b9b0dc8eee14572e5989f5e5b9</authentication>
      </file>
      <file fileId="66">
        <src>https://www.johnntowse.com/LUSTRE/files/original/eacb3785916f054ab50505311197fa3e.csv</src>
        <authentication>7ce9e588ce7f7c8b54751f5459edfc25</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1696">
                <text>The effects of ambient temperature on aggressive cognitions&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1697">
                <text>Melissa Barclay</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1698">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1699">
                <text>The world is getting warmer, and researchers are interested in how changes in temperature experience affect human behaviour. The heat hypothesis suggests that an increase in heat is associated with an increase in antisocial behaviour (e.g. violence, aggression). However, social embodiment studies have also found hotter temperatures to be associated with less antisocial behaviour (e.g. greater gift-giving). This study investigated whether higher ambient temperatures are associated with more or less antisocial responding, using a controlled laboratory approach. Participants were placed in either a cold room or a hot room while they completed two tasks that implicitly measured the accessibility of aggressive cognitions. Using a combination of linear mixed effects analyses and regression analyses, the results demonstrated no significant difference between the two temperature conditions in the accessibility of aggressive cognitions in either a lexical decision go/no-go task or a word fragment completion task. Consequently, neither the heat hypothesis nor theories based upon a social embodiment framework were supported in this case. Alternative explanations and limitations of the study are discussed in light of the inconsistency between these results and those predicted by the relevant theoretical frameworks and reported in previous research. Directions for future research are suggested given the present findings.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1700">
                <text>Ambient, temperature, aggression&#13;
&#13;
Linear mixed effects modelling, regression, correlation&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1701">
                <text>Participants&#13;
	In total, 65 participants took part in this study. Unfortunately, the preregistered sample size of 120 participants could not be reached due to recruitment limitations. Participants were recruited via Lancaster University’s SONA system or adverts, were friends of the researcher, or were recruited on an opportunistic basis around the Lancaster University campus. As a reward for participating, participants were entered into a prize draw to win one of 12 £10 Amazon vouchers. Participants were excluded if they met any of several a priori agreed-upon exclusion rules: (a) being a non-native English speaker, or (b) making a connection in the debrief section between the room temperature and the aggression measurements. Three participants were excluded from the analyses on this basis, leaving 62 participants’ data in the analyses. Demographic information was obtained using questions on the Qualtrics survey (Qualtrics, Provo, UT). The mean age of participants was 25.29 years (SD = 8.83; 43 female, 19 male). It was preregistered that participants must be between 18 and 55 years of age; however, to increase the sample size, the age range was widened to 18 to 60 years. Participants were randomly assigned to the cold condition (n = 31) or the hot condition (n = 31).&#13;
&#13;
Materials&#13;
	Lexical decision go/no-go task. A lexical decision go/no-go task was used to gauge the accessibility of aggressive cognitions. The standard lexical decision task (LDT) is an indirect measure of semantic activation of specific constructs (e.g. aggression) and is an excellent method of assessing the activation of such semantic networks (Marsh &amp; Landau, 1995; see Parrott, Zeichner &amp; Evces, 2005). Advantageously, as the task does not require conscious expression, it is not easily affected by demand characteristics (see Greitemeyer &amp; Osswald, 2011). The LDT was used in conjunction with a go/no-go response, whereby participants were instructed to respond as quickly as possible to a word (as in the LDT) but to withhold any response if the presented stimulus was a nonword. The lexical decision go/no-go task has been demonstrated to be an excellent alternative to the standard LDT that measures performance in a similar manner (Perea, Rosa &amp; Gomez, 2002). Essentially, network activation is measured by the latency with which participants respond to particular stimulus words, with faster reaction times (RTs) indicating greater accessibility of the target construct (i.e. aggression) (Forster &amp; Davis, 1984; Johnson &amp; Hasher, 1987; Schacter, 1987; Morton, 1970). Specifically, faster RTs to aggressive words by participants in the hot condition, compared to the cold condition, would suggest that the construct of aggression is more accessible in hotter conditions.&#13;
	The lexical decision go/no-go task included the presentation of one hundred letter strings: 25 aggressive-related words (e.g., gun), 25 nonaggressive words (e.g., leaf) and 50 nonword letter strings (e.g., breaff). The aggressive-related words were taken from Anderson, Carnagey &amp; Eubanks (2003) and Johnson (2012). The nonaggressive items were extracted from Anderson et al. (2003) or chosen by the experimenter. Three independent raters who were blind to the study aims assessed the nonaggressive and aggressive words to determine whether they were appropriately classified as nonaggressive or aggressive, respectively. Fleiss’ kappa demonstrated perfect agreement between the three raters’ judgments, κ = 1, p &lt; .0001, indicating that the raters agreed that all items coded as aggressive or nonaggressive were appropriately coded as such. Nonword letter strings took the form of pseudowords to prevent participants from classifying the words by a simple surface analysis of substrings. To illustrate, a letter string containing “xx” can be quickly and easily recognised as a nonword without in-depth processing, because no valid English words contain “xx” (see Bösche, 2010).&#13;
	Furthermore, research has demonstrated that more frequent words (e.g. Perea et al., 2002) and shorter words are responded to more quickly (e.g. Spieler &amp; Balota, 2000). Given this, the word frequency of each real word (i.e., aggressive-related and nonaggressive words) was obtained from the SUBTLEX-UK database (Van Heuven, Mandera, Keuleers, &amp; Brysbaert, 2014), and the word type categories were matched on word length. According to Welch’s t-test, there was no significant difference between the aggressive-related words and nonaggressive words in word frequency, t(40) = 1.64, p = .12, or word length, t(48) = 0, p = 1. Together this reduces the effect that word length and frequency might have on response latencies.&#13;
	In the lexical decision go/no-go task, participants were instructed to respond by pressing the ‘spacebar’ key on the keyboard when presented with a valid English word (i.e. go response) however to withhold any response if presented with a nonword (i.e. no-go response). The experimental trials consisted of 50 real word letter strings and 50 nonword letter string trials. The onset of each trial was marked by a plus sign (+), which acted as a fixation point for the participant. After a 1000ms latency, the fixation point was replaced by a letter string. This stimulus item disappeared after a latency of 3000ms and was followed by the next fixation point and then the next letter string was presented automatically in the same aforementioned fashion. The presentation and randomization of letter strings, and the recording of response latencies were controlled by JavaScript code running on Qualtrics. &#13;
&#13;
	Word Fragment Completion (WFC) Task. To measure the activation of aggressive thoughts, participants also completed a WFC task consisting of 50 word fragments (adapted from Anderson et al., 2003). Using Qualtrics, participants filled in the blanks with letters to form valid English words within a five-minute timeframe. Of the 50 word fragments, 25 could be completed to form either a nonaggressive or an aggressive word (e.g., “ki__” could be completed as “kill” or “kite”); the other 25 could be completed only with nonaggressive words. Only the fragments with possible aggressive completions were used in the analyses; the remaining 25 served as decoys to ensure that participants would not guess that aggression was being measured. If a fragment could not be completed, participants were required to leave the answer box blank. This task is a valid measure of aggressive cognitions (Anderson et al., 2003). The outcome variable of aggressive cognitions was calculated by dividing the number of fragments completed as aggressive words by the total number of fragments that could be completed aggressively. Fragments were presented in a randomised order for each participant, controlled by Qualtrics. &#13;
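The scoring rule for the WFC task reduces to a single proportion. The sketch below assumes hypothetical data structures (a fragment-to-response map and a fragment-to-aggressive-completions map) that are not part of the original materials:

```javascript
// Scoring sketch for the WFC task: the proportion of critical fragments
// completed as aggressive words, out of all fragments that could be
// completed aggressively. Names and data layout are illustrative.
function wfcScore(responses, aggressiveCompletions) {
  const critical = Object.keys(aggressiveCompletions);
  const nAggressive = critical.filter((frag) =>
    aggressiveCompletions[frag].includes((responses[frag] || '').toLowerCase())
  ).length;
  return nAggressive / critical.length;
}
```

Blank answer boxes (fragments left uncompleted) simply fail the membership test, so they count against the aggressive proportion rather than breaking the calculation.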
&#13;
	Baseline Temperature Comfort. A measure of baseline temperature comfort was also included in the Qualtrics survey, capturing how cold or hot the participant generally feels. This was measured on a rating scale from -50 to +50, where higher scores indicate feeling generally hotter. Many factors, ranging from the physical to the cultural, can affect an individual’s thermal perception and comfort (Laskari et al., 2017; see, e.g., Djamila, 2017). For example, deviations in body temperature can have physiological roots, such as age (Castle, Norman, Yeh, Miller &amp; Yoshikawa, 1991). These factors vary across individuals, raising the possibility that individuals have baseline temperatures or comfort levels that differ systematically from the population average (Obermeyer, Samra, &amp; Mullainathan, 2017). In other words, the same temperature that is normal for one person might be cold for another. Given this, variation in individuals’ subjective baseline temperature comfort will be explored to see whether it moderates temperature effects on aggressive cognitions. &#13;
	&#13;
	Outside Temperature. A measure of outside temperature was not originally planned and its inclusion was not preregistered. However, data from the local weather station were used to calculate the outside temperature during each testing session. Overall, the mean outside temperature was 18.6°C (SD = 2.91), ranging from 12.6 to 22.9°C.&#13;
&#13;
Procedure and Design &#13;
 	Participants were welcomed into either the cold or the hot room depending on their random allocation. The room temperature was recorded before each testing session began; across all sessions, temperatures ranged from 15.5–16.9°C (M = 16.14, SD = 0.39) in the cold condition and 27.8–29.8°C (M = 28.56, SD = 0.60) in the hot condition. The heat-controlled room contained five workstations equipped with conventional PCs, allowing data collection from up to five participants at a time; participants were separated by partitions between the workstations. At their workstation, participants received the study information and gave their consent to participate. They then completed four decision-making tasks using the Qualtrics survey software: two measured the accessibility of aggressive thoughts (i.e., the lexical decision go/no-go task and the WFC task) and two measured cognitive ability (as part of another student’s MSc project). All task instructions were given via the computer. The four tasks were presented in a randomised order between participants by Qualtrics to reduce order effects (e.g., participants may be tired for tasks at the end) and carryover effects (e.g., earlier tasks may influence behaviour on subsequent tasks) (see Shaughnessy, Zechmeister &amp; Zechmeister, 2006). &#13;
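A standard way to produce an unbiased task-order randomisation of the kind described above is the Fisher–Yates shuffle. The sketch below is illustrative only and makes no claim about how Qualtrics randomises internally:

```javascript
// Fisher-Yates shuffle: returns a uniformly random permutation of `items`
// without mutating the input. `rand` is injectable for reproducible tests.
function shuffle(items, rand = Math.random) {
  const a = items.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}
```

Each participant would receive one such permutation of the four tasks, spreading any fatigue or carryover effects evenly across task positions.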
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1702">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1703">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1704">
                <text>Barclay2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1705">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1706">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1707">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1708">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1709">
                <text> Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1710">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1711">
                <text>Dermot Lynott</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1712">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1713">
                <text>Cognitive Psychology&#13;
Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1714">
                <text>65 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1715">
                <text>Confirmatory Analysis&#13;
Exploratory Analysis&#13;
Regression Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
