<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=5" accessDate="2026-05-03T01:46:24+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>5</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="158" public="1" featured="0">
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3220">
                <text>Exploring the Effectiveness of Metaphors in Video Advertising - the Interaction Effect of Different Cultural Groups and Different Metaphors </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3221">
                <text>Lesley Wu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3222">
                <text>7th September 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3223">
                <text>Metaphors are often used in contemporary advertising, and previous research has confirmed that advertisements with metaphors are more effective than literal ones. At the same time, research into the role of metaphors has become more comprehensive, moving from traditional metaphor theories based solely on literal language to the study of the interactive effects of different modalities of metaphor (multimodal metaphor). The aim of this study was to understand the differences in the responses of different cultural groups when exposed to advertisements containing different types of metaphors (needs-highlighting metaphor vs. feature-highlighting metaphor). To test this expectation, a 2 (culture: British, Chinese) × 3 (advertisement type: feature-highlighting metaphors, needs-highlighting metaphors, and literal advertisements) mixed-design experiment was conducted.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3224">
                <text>Marketing&#13;
Psycholinguistics</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3225">
                <text>Design &#13;
To obtain statistics on the extent to which creative metaphors in video advertising contribute to the effectiveness of advertisements, a quantitative research method was used in this study. To test whether there was an interaction effect between cultural group and metaphor type, this experiment had a 3×2 mixed design, with a within-subjects factor of advertisement type (feature-highlighting metaphors, needs-highlighting metaphors, and literal advertisements) and a between-subjects factor of participants’ culture (British and Chinese). The dependent variables were attitude toward the product/advert and purchase intentions. &#13;
Participants &#13;
Fifty-three participants were recruited through convenience sampling and took part in the study by completing an online survey. Responses from participants who either did not complete the consent form or did not answer all the questions were excluded from the analyses, leaving a total of 40 responses: 20 from Westerners (10 men, 10 women) and 20 from Chinese participants (9 men, 11 women). Table 1 provides an overview of the participants’ information in this experiment. Most of the participants were studying at Lancaster University at the time; some of the Chinese participants were living in China. As the aim of the experiment was to examine cultural differences, no specific age restrictions were set. &#13;
Materials &#13;
In the current experiment, the selection of stimulus classification conditions was based on the setting of Pan's study in 2020. However, in order to investigate the pattern and consistency of people's responses under different conditions, the number of stimuli under each condition was larger in this experiment. The stimuli consisted of 9 video ads in total: 3 ads each for the literal, feature-highlighting metaphor, and needs-highlighting metaphor conditions. All ads featured tangible products: perfume, body wash and deodorant, with 3 ads per product covering all 3 conditions. The experimental manipulation was based on the metaphorical dimension of the advertisements. Table 2 provides an overview of advertisement conditions and the links to view them. &#13;
Each metaphor condition contained at least one metaphor in the stimuli, while the literal advertisements served as a control condition. The length of the selected advertisements was controlled to be less than 120 seconds (about 2 minutes). Advertisements created in recent years, between 2012 and 2021, were chosen. &#13;
Phau and Prendergast's study (2000) found that consumers associated the image of a brand with the image of its country of origin. In order to minimise the influence of consumers' previous perceptions of brand image, the advertisements chosen for this experiment were made for well-known brands whose countries of origin were all developed countries, such as the USA, the UK and Japan. &#13;
Advertisements created in different countries were chosen; therefore, the languages of the original advertisements were Chinese, English and Japanese. All advertisements were given Chinese and English subtitles, which were checked by native Chinese speakers with undergraduate degrees in Japanese translation and English translation. As the videos exceeded the size of attachments that could be added to the Qualtrics questionnaire, the video advertisements with bilingual subtitles were uploaded to OneDrive and the links were added to the questionnaire for participants to view. All selected video advertisements were sourced from internet platforms. &#13;
To measure attitudes toward the ad and purchase intentions, questions were formulated based on questions previously used in marketing research (Jeong, 2008; Kim, Baek &amp; Choi, 2012; Pan, 2020). &#13;
Attitudes towards advertisement. Participants were asked to rate/evaluate the ad on 4 scales, i.e., to what extent they agreed that the ad is ‘good’, ‘favourable’, ‘pleasant’, and ‘appealing’; the scales ranged from 1 (Strongly disagree) to 7 (Strongly agree) (Jeong, 2008).  &#13;
Purchase intentions. Participants were asked to rate the value of the item being promoted, the probability of purchasing the promoted product, and the probability of recommending the products to their family or friends (Maheswaran &amp; Meyers-Levy, 1990). &#13;
The original questions above were in English and were translated into Chinese for the Chinese participants who took part in this study. The translations were checked for equivalence of meaning by a native Chinese-speaking researcher with expertise in English translation. Variables and measures in this study are provided in Table 3. &#13;
 &#13;
Procedure &#13;
All ethical procedures relating to data collection and informed consent were reviewed and approved by the Faculty of Science and Technology Research Ethics Committee at Lancaster University. The data collected were anonymised upon extraction from Qualtrics; no participant information beyond the critical data is included. &#13;
All participants were asked to complete an online questionnaire. They could access the survey either via a QR code or via the shared link from Qualtrics. The questionnaire was set up on Qualtrics in English and Chinese versions. The first section included a participant information sheet and the consent form, followed by the experimental section.  &#13;
In this section, each video advertisement and the corresponding questions were grouped into a separate question block, each with a link to a specific advertisement for participants to view. This was to make sure participants focused on watching and evaluating one advertisement at a time. To move to the next block, participants had to complete the questions evaluating the current video and press a button to access the next question block. Participants rated the properties of each advertisement immediately after exposure to it. The order of ads presented was fully randomised and differed for each participant. To prevent participants' overall liking of the advertised brand, product or brand spokesperson from influencing their assessment of each attribute of the advertisement, and to obtain valid data, participants were reminded in each question block to rate the advertisement itself with the sentence: "If you have any knowledge of the brands or products, please try to rate the following ads by excluding your liking of them (including the celebrity spokesperson) and your current purchasing needs." Finally, participants clicked the submit button and were debriefed and thanked for their participation. The study took approximately 40 minutes and participants were paid £6.50 for their time. &#13;
Statistical analysis &#13;
The data were examined and analysed using SPSS software. A two-way mixed ANOVA (analysis of variance) was used to examine the effects of two independent variables, i.e., advertisement condition (within-participants, with 3 levels: needs-highlighting metaphor, feature-highlighting metaphor, literal) and culture (between-participants, with 2 groups: Chinese, British), on two dependent variables, i.e., attitude towards the advertisement and purchase intentions.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3226">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3227">
                <text>SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3228">
                <text>Wu2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3229">
                <text>Chrisie Pullin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3230">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3231">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3232">
                <text>English and Chinese</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3233">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3234">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3235">
                <text>Francesca Citron</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3236">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3237">
                <text>Marketing&#13;
Psycholinguistics</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3238">
                <text>40</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3239">
                <text>Mixed ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="156" public="1" featured="0">
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3205">
                <text>Dr Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3206">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3207">
                <text>Neuro-clinical psychology </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3208">
                <text>20</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3209">
                <text>T-test and regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="155" public="1" featured="0">
    <fileContainer>
      <file fileId="161">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8154f97af93267514bfb20a6c3f3ef81.doc</src>
        <authentication>d960205f74b85b3da78afddb4fda542d</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3180">
                <text>Farmer and Non-Farmer Attitudes towards Alternative Animal Products</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3181">
                <text>Chloe Crawshaw</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3182">
                <text>23/09/22</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3183">
                <text>Farmers’ livelihoods and way of living could be argued to be under threat from the simultaneous rapid rise of plant-based products, development of cultured products, and our growing understanding of the detrimental impact of traditional animal agriculture. Little research has investigated farmers’ attitudes towards cultured and plant-based products. Furthermore, farmers appear to have limited awareness of these animal product alternatives. This study presented 45 omnivorous farmers and 53 omnivorous non-farmers with information about plant-based burgers, cultured burgers, plant-based milk, and cultured milk. Product acceptance and COM-B facilitators and barriers were explored. Farmers were less accepting of all alternative products than non-farmers, suggesting that their vested interest in the continuation of traditional animal agriculture affected their attitudes towards alternative products. Closer inspection of farmer acceptance suggests that personal investment in animal agriculture also led to differences within farmers, with occupational farmers being less accepting of the products than members of farming families. The findings are interpreted using the Transtheoretical Model to suggest that, regarding the adoption of alternative products, occupational farmers appear to be in the rejection stage, whereas members of farming families appear to be in the contemplation stage. As occupational farmers had more negative attitudes towards the alternative products, they appear more likely to consider the alternatives a threat to their livelihood.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3184">
                <text>farmers, plant-based alternatives, cultured products, COM-B Model, Transtheoretical Model</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3185">
                <text>Participant Recruitment and Exclusions&#13;
Participant recruitment followed a pre-registered plan (https://aspredicted.org/blind.php?x=QL3_H96). Between July and August 2022, two groups of participants were recruited: adults with experience of livestock farming (Farmers) and a comparison group of adults without experience of livestock farming (Non-Farmers). Farmers make up a very small percentage (0.2%) of the UK population (DEFRA, 2021), so we included current farmers, retired farmers, farm workers, and members of farming families. &#13;
Fifty-five livestock farmers, predominantly living in Gloucestershire, were recruited using snowball sampling. Farmers who were known to the author were first contacted via telephone or social media, or visited in person. Interested participants were provided with the URL link to the questionnaire, a brief description of the study, and a request to forward the information to other individuals in the farming community. Individuals without internet access received a paper copy of the questionnaire. &#13;
Sixty-one non-farmers were recruited through snowball sampling using the same method as for farmers. As farmers are typically older males (DEFRA, 2019), we attempted to match the ages of the non-farmers to the farmers, and effort was taken to recruit female farmers and members of farming families. Our recruitment plan was to recruit a minimum of 40 participants per group. To qualify for the study, farmers and non-farmers had to be omnivores. &#13;
A further 23 farmers and 10 non-farmers were recruited using Prolific by pre-screening for those in the ‘Agriculture, Food, and Natural Resources’ employment sector; the description of the study also encouraged participation among those with “experience of working with farmed animals.” &#13;
A total of 130 participants consented to participate: 55 farmers, 61 non-farmers, and a further 14 who were excluded because they did not reach the demographics section and so could not be classified into a group. Following our preregistered exclusion criteria, 18 participants who reported dietary restrictions were excluded (10 Farmers and 8 Non-Farmers). The final sample consisted of 45 Farmers and 53 Non-Farmers. &#13;
Design and Procedure &#13;
A 2x4 mixed design was used, with Group as a between-subjects factor with two levels (Farmer and Non-Farmer) and Product type as a within-subjects factor with four levels (plant-based burgers, cultured beef burgers, plant-based milk, and cultured cow’s milk). Participants completed an online questionnaire on Qualtrics (Qualtrics, 2005) that “drew attention to existing and emerging food innovations and explored beliefs and attitudes towards these products”, see Appendix A. The questionnaire took approximately 15 minutes. &#13;
Ethical Statement &#13;
The study was approved by Lancaster University’s Department of Psychological Ethics Committee. Participation was anonymous and Farmers were not asked to disclose the name or location of their farm. All participants gave their informed consent before accessing the questionnaire. On completion of the questionnaire, participants were debriefed, reminded of their right to withdraw their data, and were thanked.&#13;
Materials&#13;
The questionnaire comprised six sections: vignettes, product acceptance, facilitators and barriers to product acceptance, consumer behaviour, demographics, and farming information. &#13;
Vignettes&#13;
Participants were presented with a brief description of factory farming, including its prevalence in the UK and the negative consequences for farmed animals and the environment. See Appendix B for full vignette details and references. Factory farming was chosen as it is the main method of farming in the UK (FAIRR, 2016). Participants were then presented with brief descriptions of plant-based products and methods of creating cultured animal products. Product features were compared against traditional animal products, including sensory qualities, nutritional content, animal involvement, and environmental impact. Using a table similar to Van Loo et al. (2020), participants were presented with a comparison of the relative environmental impact of a plant-based soya burger and a cultured beef burger against a factory-farmed beef burger.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3186">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3187">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3188">
                <text>Crawshaw2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3189">
                <text>HanYi Wang&#13;
Amie Suthers</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3190">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3191">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3192">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3193">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3194">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3210">
                <text>Dr Jared Piazza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3211">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3212">
                <text>Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3213">
                <text>98 (45 farmers and 53 non-farmers)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3214">
                <text>Chi-squared&#13;
Correlation&#13;
Kruskal-Wallis, MANOVA, Wilcoxon Signed Rank, Mann-Whitney U</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="154" public="1" featured="0">
    <fileContainer>
      <file fileId="172">
        <src>https://www.johnntowse.com/LUSTRE/files/original/3ddc0d86634b8437530ec3352beb2ebc.pdf</src>
        <authentication>1ad80421bc21a8ecbaac8b6704bb657f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3165">
                <text>Levodopa and antisaccade performance in Parkinson’s disease: the influence of intrinsic dopaminergic functioning, dopamine agonists and chronic anti-parkinsonian medication </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3166">
                <text>Amy Austin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3167">
                <text>14th September 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3168">
                <text>The antisaccade (AS) task is a validated eye-tracking paradigm primarily used to assess response inhibition. Although several studies have established AS error rate and latency to be increased in Parkinson’s disease (PD), the evidence regarding the effect of existing anti-parkinsonian medication (e.g., levodopa) on these parameters is contradictory. According to the dopamine overdose hypothesis (DOH), the effect of levodopa on AS performance should depend upon the intrinsic dopaminergic functioning of the individual. The current study is the first to use spontaneous eye blink rate (SEBR), a proxy measure for dopamine activity, to investigate the influence of intrinsic dopaminergic functioning on AS performance following levodopa consumption. The influence of additional PD-related factors was also examined. SEBR and AS performance were assessed in eleven healthy controls (HC) and nine participants with PD. In participants with PD, SEBR and AS performance were assessed twice, once 30 minutes prior to, and once one hour after, the consumption of levodopa. Pre-levodopa consumption SEBR was a significant positive predictor of AS error rate post, but not pre, levodopa consumption. Total years consuming anti-parkinsonian medications was positively predictive of AS error rate both pre and post levodopa consumption. The regular consumption of dopamine agonists significantly predicted fewer AS errors following the consumption of levodopa. The current results support the DOH: higher intrinsic dopaminergic functioning was associated with increased AS errors following the artificial stimulation of dopamine by levodopa. Therefore, artificial dopaminergic stimulation of an intrinsically sufficiently functioning dopaminergic system appears to produce an overstimulation/overdose effect, with consequent detrimental effects on AS performance/response inhibition.
The current findings go some way towards explaining the inconsistencies within the literature.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3169">
                <text>Keywords: Parkinson’s disease, dopamine overdose hypothesis, spontaneous eye blink rate, levodopa, dopamine agonists, antisaccade </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3170">
                <text>Twenty-one participants, 10 individuals with mild-moderate idiopathic PD (Mage = 67.10, SDage = 8.63) and 11 healthy control older adults of comparable age (HC; Mage = 66.82, SDage = 9.09), were recruited to the study. The mean age of recruited HC and PD individuals did not differ significantly, t(18.95) = -0.07, p = .943. Participants were recruited via established research databases and via the social network of the researcher. As the current study focused on PD, participants with a diagnosis of any neurological condition (beyond PD) were excluded. Additionally, as depression and anxiety influence an individual’s saccadic performance profile and SEBR (Jazbec et al., 2005; Mackintosh et al., 1983), individuals who obtained a clinically moderate depression or anxiety score, as measured by the Hospital Anxiety and Depression Scale (HADS), were excluded. Similarly, mild cognitive impairment (MCI) and dementia are associated with increased AS error rate and AS latency (Opwonya et al., 2022), and increased SEBR (D’Antonio et al., 2021). As such, those who presented a cognitive profile indicative of MCI/dementia (score &lt; 82 on the Addenbrooke’s Cognitive Examination-III, ACE-III; Hsieh et al., 2013) were excluded from the current study. Finally, as experimental stimuli in the current study were coloured red and green, individuals with red-green colour vision deficiency, detected via the Ishihara test (Ishihara, 1917), were also excluded. &#13;
On these grounds of exclusion, one individual with PD was excluded from the current study due to obtaining an ACE-III score indicative of MCI. Subsequently, nine individuals with mild-moderate idiopathic PD (Mage = 65.89, SDage = 8.21) and eleven HC individuals (Mage = 66.82, SDage = 9.09) participated in the study. All participants had normal or corrected to normal vision. &#13;
All participants with PD were classified as Hoehn and Yahr stage II or below (Hoehn &amp; Yahr, 1998), indicating they were physically independent and capable of completing all study tasks. At the time of testing, all PD participants were receiving anti-parkinsonian medication (see Table 2 for PD sample anti-parkinsonian medication summary). All PD participants were tested under their normal medication regime; that is, participants attended the study 30 minutes prior to the consumption of their next, normally scheduled, dosage of levodopa-based medication. Accordingly, measures were obtained both pre (30 minutes prior) and post (1 hour after) levodopa consumption, permitting the respective investigations of pre and post levodopa consumption SEBR, motor symptom severity, AS performance and PS performance. &#13;
An online calculator computed the levodopa equivalent daily dosages (LEDD) for each participant with PD. LEDD indicates the equivalent amount of levodopa an individual receives from all anti-parkinsonian medications across a 24-hour window (Julien et al., 2021). The online calculator can be accessed via: https://www.parkinsonsmeasurement.org/toolBox/levodopaEquivalentDose.htm &#13;
Materials and measures &#13;
Online questionnaire &#13;
A questionnaire comprising a demographics and health screening survey, the Edinburgh Handedness Inventory (EHI), the HADS, and a PD and associated medication survey was developed and distributed via Qualtrics (Qualtrics, 2013). The questionnaire required 15 minutes to complete. &#13;
Demographics and health screening survey. Participants were asked to disclose key demographic and health information (e.g., age, sex, whether they had normal or corrected to normal vision). Participants were also asked to disclose any history of visual impairments, neurological conditions (beyond PD), psychiatric illness, or rheumatic illness. &#13;
The EHI (Oldfield, 1971). The EHI is a highly reliable (r = .97, p &lt; .001; Oldfield, 1971) and internally consistent (α = 0.88; Oldfield, 1971) self-report measure of an individual’s hand dominance (Edlin et al., 2015). Participants are requested to indicate their typical hand preference, via five-point Likert scales ranging from ‘always left’ to ‘always right’, when completing a range of daily activities (e.g., writing). A final score of ≥ 50 indicates right hand dominance, &lt; 50 to &gt; −50 indicates ambidexterity, and ≤ −50 indicates left hand dominance. As hand dominance typically corresponds to ocular dominance (McManus et al., 1999), the EHI was used to infer the dominant eye of each participant in the current study. Monocular eye tracking was then conducted on the dominant eye (Ehinger et al., 2019). &#13;
The HADS (Zigmond &amp; Snaith, 1983). The HADS is a short self-assessment questionnaire validated to detect anxiety and depression within the general population, inclusive of the elderly (Bjelland et al., 2002). Respondents are required to indicate, via four-point Likert scales, how 14 items relate to their recent feelings. Responses range from ‘0’ (the item has little relevance to recent feelings), to ‘4’ (the item is significantly representative of recent feelings). Likert responses are summed separately for anxiety and depression relevant items. Scores of seven or less indicate no notable presence of anxiety and depression. Scores ranging between eight and 10 indicate mild levels, between 11 and 14 indicate moderate levels, and between 15 and 21 indicate severe levels. &#13;
PD and associated medication survey. Individuals with PD were asked to disclose further health information regarding the number of years since their PD diagnosis, which anti-parkinsonian medications they were currently receiving, the daily dosages of these medications and the total number of years they had been consuming anti-parkinsonian medications. &#13;
ACE-III (Hsieh et al., 2013) &#13;
The ACE-III is a well validated (Hsieh et al., 2013), highly reliable and internally consistent (ICC = 0.92, α = 0.87 respectively; Takenoshita et al., 2019) cognitive assessment used to screen for the presence of MCI and dementia syndromes (Hsieh et al., 2013). To provide a global neuropsychological evaluation, participants are asked to complete tasks assumed to relate to five principal cognitive functions, namely: memory, language, attention, visuospatial skills, and verbal fluency (Hodges &amp; Larner, 2017). Scores ascertained from each of the five domains are summed and the individual receives an overall score relative to the maximum possible score of 100. Higher scores indicate better cognitive functioning. A score below 82 is indicative of cognitive impairment. &#13;
Ishihara colour deficiency test (Ishihara, 1917) &#13;
The Ishihara colour deficiency test is a 38-item assessment of red-green colour perception. Typical red-green colour vision is marked by the ability to correctly decipher a number or pattern embedded within 38 red/green circular images. The test requires three minutes to complete. &#13;
MDS-UPDRS (Goetz et al., 2008) &#13;
Both motor and non-motor PD symptoms were evaluated using the MDS-UPDRS. The MDS-UPDRS is comprised of four distinct subscales. Subscale I focuses on non-motor symptoms associated with PD (e.g., cognitive impairment, dopamine dysregulation syndrome), whereas subscales II – IV focus on the motor symptoms associated with PD. Subscales I, II and IV require participants to retrospectively respond with answers reflecting their average symptoms/experiences over the previous week, whereas subscale III directly assesses current functioning via a motor exam. The motor examination requires participants to perform a series of motor tasks (e.g., finger tapping, walking, arising from a chair) under the observation of the examiner. The examiner rates the severity of motor impairment displayed during each motor task performed. All subscales of the MDS-UPDRS are scored according to four-point Likert scales whereby ‘0’ indicates no impairment and ‘4’ indicates the most severe impairment. Hoehn and Yahr (Hoehn &amp; Yahr, 1998) stages were calculated based upon the MDS-UPDRS assessment. The cumulative score of subscales I, II, III and IV provides an overall MDS-UPDRS score indicative of PD severity. A maximum score of 199 reflects the most severe disability resulting from PD (Holden et al., 2018). The MDS-UPDRS requires approximately 30 minutes to complete. &#13;
SEBR &#13;
SEBR was assessed by recording participants’ eye movements while they sat at rest. The recording device was located approximately 55cm directly in front of the participant. Participants were not informed that they were completing an assessment of their blink rate, nor were they engaged in conversation with the examiner, as both informing participants that their blink rate is being assessed and conversing increase SEBR (Doughty, 2001). Participants’ eye movements were recorded for two and a half minutes; however, only the last minute of each recording was coded for SEBR (one minute is long enough to obtain a representative blink rate; Deuschl &amp; Goddemeier, 1998). A blink was identified (and coded accordingly) as full eyelid closure resulting from bilateral movement of the eyelids (Kimber &amp; Thompson, 2000). SEBR was scored as the number of blinks per minute. PD participants’ pre-levodopa consumption SEBR was considered their baseline SEBR, reflective of intrinsic dopaminergic functioning (Kimber &amp; Thompson, 2000). &#13;
Eye tracking tasks &#13;
Apparatus &#13;
A desktop mounted eye tracker (EyeLink Desktop 1000), operating in monocular mode with a sampling rate of 500 Hz, was used to record eye movements of the participant’s dominant eye. An adjustable chin rest with attached forehead rest was utilized to minimise head movements. The eye tracking camera was located at the base of the stimulus-presenting computer monitor. Participants sat approximately 55cm away from the eye tracking camera and computer monitor. A 4-point calibration, whereby participants are asked to fixate upon a red circle as it moves from the top, bottom, right and left side of the computer screen, was used prior to the commencement of all eye tracking tasks. Frequent calibration improves the accuracy of eye-tracking data (Pi &amp; Shi, 2019). All eye tracking tasks were developed and operated using Experiment Builder software version 1.10.1630. Habitual eyeglass wearers were not required to remove their eyeglasses during eye tracking tasks. Eye tracking tasks required approximately 10 minutes to complete. &#13;
Prosaccade task &#13;
Participants completed four practice trials and 16 experimental gap trials. To centre a participant’s gaze at the start of each trial, a white fixation stimulus was presented for 1000 milliseconds (ms) in the centre of a black computer screen. A red lateralised target was then displayed randomly either to the right or the left of the central fixation for 1200ms at 4° eccentricity. The PS task operated according to the gap paradigm. Accordingly, to create a temporal gap between fixation and target stimuli, a black interval screen was presented for 200ms between the extinguishing of the white fixation stimulus and the presentation of the red target stimulus. For the PS task, participants were instructed to shift their visual focus towards the location of the red target as quickly and as accurately as possible. &#13;
Antisaccade task &#13;
Participants completed four practice trials followed by 24 experimental gap trials. Participants were presented with a white central fixation stimulus on a black computer screen for 1000ms. Following a 200ms black interval screen, a green lateralised target stimulus was presented at random to either the left or right of the central fixation. The green target was displayed for 2000ms at 4° eccentricity. Participants were instructed to shift their visual focus in the opposite direction to where the green target stimulus appeared. An example of a successful trial would be as follows: if the green target stimulus was presented left-lateralised, participants should direct their gaze to the right side of the computer screen. &#13;
Procedure &#13;
The present study was reviewed and approved by Lancaster University’s ethics committee. All participants provided informed consent prior to participating. &#13;
Participants were tested on one day and testing sessions took no longer than two hours. Individuals with PD completed SEBR assessments, MDS-UPDRS III motor examinations and all eye tracking tasks twice, once 30 minutes prior to consuming their usually scheduled dosage of levodopa medication, and once again one hour following the consumption of their levodopa medication. Prior research indicates that one hour is sufficient for levodopa to be metabolized and produce therapeutic effects (Lu et al., 2019). This method of testing the effect of anti-parkinsonian medications is widely used within the literature and no detrimental effects of this method have been reported (Cools et al., 2003). Similarly, re-testing on the PS and AS tasks does not significantly influence performance (Larrison-Faucher et al., 2004). HC participants completed all study tasks once. &#13;
All participants completed the online questionnaire 48 hours prior to attending testing sessions. Upon arriving for testing, all participants completed an assessment of SEBR followed by the PS and the AS tasks. HC participants then completed the ACE-III and the Ishihara test. HC participation in the study was then complete. PD participants continued with further testing. Specifically, PD participants then completed the MDS-UPDRS subscale III motor examination. PD participants then consumed their usual dose of levodopa medication at their usual time. During the one-hour levodopa metabolization period, participants with PD completed subscales I, II and IV of the MDS-UPDRS, the ACE-III and the Ishihara test. &#13;
Once one hour had elapsed, individuals with PD then re-completed an assessment of SEBR, the PS and the AS task, and were also re-assessed via the MDS-UPDRS subscale III motor examination. Thus, motor symptom severity (MDS-UPDRS III), SEBR and eye-tracking data were obtained from both pre (baseline) and post levodopa consumption medication states. &#13;
Data processing &#13;
Raw data were extracted via EyeLink using DataViewer Software Version 3.2 and processed offline using the bespoke software SaccadeMachine (Mardanbegi et al., 2019). SaccadeMachine removes noise and spikes within the data; frames with a velocity signal greater than 1500 deg/s or with an acceleration signal greater than 100,000 deg²/s are filtered out. Fixations and saccadic events were detected via the EyeLink Parser. Trials were excluded where participants failed to direct their gaze to the central fixation stimulus. To ensure saccadic data were reflective of responses to target presentation, a temporal window of 80-700ms from the initial onset of the target stimulus was used (i.e., anticipatory saccades produced prior to 80ms, and excessively delayed saccades produced after 700ms, were excluded). The following variables were extracted from the processed data: PS latency (the time taken between the onset of the target stimulus and the first correct fixation), PS error rate (the number of times the participant failed to generate a reflexive saccade to fixate upon the target stimulus), AS latency (the time taken between the onset of the target stimulus and the first correct fixation in the opposite direction to the target stimulus), and AS error rate (the number of times a participant erroneously performed a reflexive PS towards the novel target stimulus instead of looking away).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3171">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3172">
                <text>Data/R.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3173">
                <text>Austin 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3174">
                <text>Rachel Jordan&#13;
Sian Reid</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3175">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3176">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3177">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3178">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3179">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3403">
                <text>Dr Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3404">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3405">
                <text>Neuro-clinical psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3406">
                <text>20 (9 individuals with mild-moderate Parkinson's disease, 11 healthy control individuals of similar age)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3407">
                <text>Regression, T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="153" public="1" featured="0">
    <fileContainer>
      <file fileId="169">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c32bb813b138e5706ec76bb2e9c3a7b3.doc</src>
        <authentication>f4062334d78cf5f0c54a8646bfb0feb2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3150">
                <text>Grasping Ability in Virtual Reality: Effects of Eating Disorders on Perceptions of Action Capabilities</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3151">
                <text>Siri Sudhakar</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3152">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3153">
                <text>Knowledge of one’s body size is vital for accurately judging an object’s size. For example, knowing the length of your arm is crucial to estimating the maximum distance reachable. Accurate perception of action capabilities is the result of a healthy mental body representation at both a conscious and an implicit level. This ability to use one’s mental body representation in action perception is assumed to be distorted in individuals with eating disorders (ED). However, unlike prior research, this study investigated the effects of both body image and body schema distortion on action capabilities. Thus, this study assessed whether the ability to update one’s perception of their action capabilities in response to morphological changes is altered in individuals with EDs. The experiment had participants (N = 20) embody small (50% of hand size), normal, and large (150% of hand size) avatar hands in virtual reality and then estimate the maximum size of a box they could grasp. The size of the box, beginning as either large or small across all three conditions, was manipulated to observe haptic perception in participants. We found that individuals with ED showed similar estimates despite embodying different hand sizes, suggesting an inability to successfully update their haptic perceptions. Low interoceptive awareness (IA) and body image disturbances were the root cause of this perceptual flaw in eating-disordered individuals. Treatment focused on improving the altered IA and implicit distortions in body schema could improve haptic perception in ED individuals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3154">
                <text>Action Capability, Eating Disorder, Interoceptive Awareness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3155">
                <text>A priori power analysis was conducted through the G*Power software (Faul et al., 2007) to determine the sample size required to achieve adequate power (N = 30). The required power (1- β) was set at .80 and the significance level (α) was set to .05. Based on Readman et al. (2021), who used the same methodology as this study, we anticipated a large&#13;
effect size of 0.9. This was deduced as that study obtained an ηp2 of .49 with a sample of N = 30. For the frequentist parameters defined, a sample size of N = 3 is required to achieve a power of .80 at an alpha of .05.&#13;
EDs are also notoriously variable. Given that previous studies using similar methodologies have typically recruited between 20 and 30 participants (Readman et al., 2020; Lin et al., 2020), we elected to recruit 30 participants (15 per condition). However, this study was only able to recruit 23 participants in total.&#13;
22 participants from Lancaster and Lancaster University (seven males, 15 females) aged 18-30 (Mage = 21.73, SDage = 1.98) participated in this study. Two participants were removed as extreme outliers, resulting in the present dataset (N = 20; Mage = 21.65, SDage = 2.06).&#13;
Amongst the participants, seven disclosed a diagnosis of ED. In accordance with the revised Edinburgh Handedness Inventory (R-EHI) classification system (Milenkovic &amp; Dragovic, 2013), the majority of participants (N = 19) were right-handed, with only one participant being left-handed. Borderline to high levels of anxiety, as measured through the Hospital Anxiety and Depression Scale (HADS; Stern, 2014), were observed in 16 participants, while seven participants showed similar levels of depression.&#13;
Eating Disorder Inventory (EDI): Participants with ED were also asked to complete the EDI. It is a self-report questionnaire that can assess the presence and level (depending on the estimate) of AN, BN, and Binge Eating Disorder (BED) (Augestad and Flanders, 2002). It consists of 64 items, with eight subscales measuring dimensions such as drive for thinness, body dissatisfaction, perfectionism, interpersonal distrust, and IA (Garner, Olmstead, &amp; Polivy, 1983; Vinai et al., 2016; Santangelo et al., 2022). Seven participants had ED while the remainder formed the healthy control group.&#13;
Design&#13;
This study used a 2 (Between factor: Group – Control vs. ED) x 3 (Within factor: Hand size – small vs. normal vs. large) factorial design. The dependent variable (DV) is the grasping ability estimate, and the independent variables are group and hand size. All participants in each group experienced all hand size conditions. The order of condition completion was randomised across participants through use of a Latin square method. Such counterbalancing allows for the control of confounding/extraneous variables and diminishes order and sequence effects, improving internal validity (Corriero, 2017).&#13;
Stimuli and Apparatus&#13;
Participants were seated an arm’s length away from the front of a standardized table. Unity 3D© Gaming Engine with the Leap motion Plugin was used to create a virtual environment in 3D VR colour. Participants were able to view this environment through an Oculus Rift CV1 Head Mounted Display (HMD). The HMD displayed the stereoscopic environment at 2,160 × 1,200 at 90 Hz split (Binstock, 2015). Head and hand movements were tracked in real-time by the HMD and the Leap motion hand-tracking sensor attached to the HMD.&#13;
The HMD ensured that the participants’ perspective was updated in real-time. Hand movements were updated in accordance with the virtual hand that was mapped onto the participant’s natural hands. Leap Motion for Unity provided assets such as avatar hands based on actual human hands. The virtual environment was visible to the participants in a first-person perspective adjusted to their height. The VR display comprised a model room, with a table located in the middle. Upon this table were either two white dots (Calibration trials) or a white box (Test trials).&#13;
 &#13;
 &#13;
Questionnaires&#13;
Revised Edinburgh Handedness Inventory (R-EHI). Participants’ handedness was deduced using the R-EHI. The revised version of the inventory was used as it addresses the inconsistencies of, and improves on the validity of, the original questionnaire (Milenkovic &amp; Dragovic, 2013). Participants’ handedness is estimated based on their preference for either hand in activities such as writing, drawing, throwing a ball, etc.&#13;
Hospital Anxiety and Depression Scale (HADS). The HADS questionnaire was also provided to all participants to assess the presence of borderline or abnormal levels of anxiety and depression. It is a quick questionnaire consisting of seven questions each for anxiety and depression, with the two scored separately (Stern, 2014).&#13;
Procedure&#13;
Participation in this study took up to an hour of the participant’s time. It was conducted in the Whewell Building of Lancaster University. Participants were recruited partly through opportunity sampling and partly through advertisements. All participants received £5 for their contribution to this study. All participants were native English speakers, had normal or corrected vision, and had no motor difficulties. Participants provided informed consent through a consent form signed before the onset of the study. They were also provided a debrief sheet and were verbally debriefed at the end of the experiment.&#13;
The methodology of this study mirrors that of Readman et al. (2021). The experiment was conducted in a virtual environment (VE) through a VR device. The inclusion of VR allows for controlled changes to grasping ability, with responses collected similar to how an individual would act in the real world (Normand et al., 2011). Moreover, the inclusion of VR enabled interactions with the morphologically altered virtual body in real-time, and in a similar physical environment through the immersive system built through the head-mounted displays (HMD) and motion sensors (Gan et al., 2021).&#13;
Participants completed the R-EHI, EDI, and HADS questionnaires before beginning the experiment. Participants were asked to don the HMD and were introduced to the virtual environment through a brief demonstration. They were given approximately 5 minutes to explore the environment, to familiarise themselves with the immersive VR experience and to ensure no undue effects occurred. Participants completed three experimental conditions: normal hand size, constricted hand size (50% of their hand size), and extended hand size (150% of their hand size). Each condition consisted of calibration and test trials.&#13;
Calibration trials. Participants were presented with the virtual table upon which two horizontally spaced dots were located. Using their dominant hand, participants were asked to touch the left-most dot with their left-most digit and then touch the right-most dot with the right-most digit of their dominant hand. This occurred for 30 trials to ensure that the participant had habituated to the virtual hand.&#13;
Test trials. The participants were instructed to place their hands behind their backs, out of sight. The Leap Motion sensor was then temporarily paused to ensure that the virtual hands were not visible to the participants. Once in this position, participants were presented with a block in the VE that they had to envision grasping with their dominant hand from above. The size of the block was manipulated, making it either larger or smaller, with each alteration causing a 1 cm change. The participant was asked to tell the researcher when the block reflected the maximum size that they would be able to grasp. The final size was saved before the participant was presented with another block.&#13;
Grasping was defined to participants as the ability to place their thumb on one edge of the block and extend their hand over the surface of the block and place one of their fingers on the parallel edge of the block. This grasp was also demonstrated to participants. Participants completed four test trials; in two test trials, the block started small (0.03 cm) and was made larger. In the remaining two trials the block started large (0.20 cm) and was made smaller. This was done to omit the hysteresis effect, which would cause prior visual stimuli to influence later perception (Poltoratski &amp; Tong, 2014). Therefore, four grasp-ability estimates were obtained for each experimental condition.&#13;
This study received ethical approval from Lancaster University Psychology department.&#13;
 &#13;
Data Analysis&#13;
An Analysis of Variance (ANOVA) is a statistical model used to examine differences in means (Rucci &amp; Tweney, 1980). The present dataset contains both a between-subjects (group) and a within-subjects (hand size) factor. Thus, a mixed ANOVA allows us to compare these variables and the means of the groups they are cross-classified with.&#13;
This is a two-way analysis as there are two independent variables (group and hand size) but only one DV (grasping ability estimate). Analysis through ANOVA is appropriate for this dataset as the effects of both variables on the response estimate can be examined (Field, 2009). This study aims to establish the effect of group and hand size on grasping ability (GA). Therefore, a mixed ANOVA would identify any significant effect of either factor on the GA estimate and examine their interaction effect. Results of the mixed ANOVA would help assess whether individuals with ED do update their estimates in response to changes in morphology.&#13;
Data Preparation&#13;
The present dataset combined demographic, physical, and questionnaire-related (EDI, R-EHI, HADS) information with GA estimates across the hand size conditions (small vs normal vs large). The GA estimate for each condition was further sub-categorized by whether the box started large or small, with four trials each. The averages of these four trials for the small-starting box and the large-starting box in each condition were taken, forming the mean grasp-ability estimates (cm).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3156">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3157">
                <text>Data/excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3158">
                <text>SUDHAKAR2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3159">
                <text>Alexia Hockett &#13;
Romina Ghaleh Joujahri</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3160">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3161">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3162">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3163">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3164">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3255">
                <text>Dr. Megan Rose Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3256">
                <text>MSc </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3257">
                <text>Cognitive, Perception </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3258">
                <text>20</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3259">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="151" public="1" featured="0">
    <fileContainer>
      <file fileId="150">
        <src>https://www.johnntowse.com/LUSTRE/files/original/54ff2b32ca6ddc076571e720c7f80444.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
      <file fileId="151">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6ee62233e0839f9c2766d58b4b93b348.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
      <file fileId="152">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6bb01a175bd17e9527b8e3c400460fb2.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3115">
                <text>Eye tracking and Attention Deficit Hyperactivity Disorder (ADHD): Can eye tracking identify the feigning of ADHD?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3116">
                <text>Reva Maria George </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3117">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3118">
                <text>When diagnosing adult ADHD, it has proven difficult for clinicians to detect deceptive behaviour. A diagnosis of ADHD comes with economic, academic, and recreational benefits, which may account for the increasing feigning of the disorder. Current diagnostic methods (clinical interviews and self-report scales) can be easily manipulated for a positive diagnosis. Hence, the present study evaluated the utility of eye tracking devices to detect the feigning of ADHD. Eye movements of 38 participants (7 ADHD, 15 healthy controls, and 16 healthy feigners) were captured throughout the prosaccade and anti-saccade tasks. The performance of the participants on the tasks was evaluated in terms of latency and percentage error rate. The findings of the study reveal a significant difference in the latency of anti-saccade tasks: feigners showed increased latency compared to healthy controls and ADHD participants. Because of the limited sample size, the findings cannot be generalized. Further investigation is needed with a much larger sample.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3119">
                <text>Eye-tracking, ADHD, Feigning, Prosaccade task, Anti-saccade task, latency, error rate, eye movements</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3120">
                <text>Method&#13;
Participants &#13;
 Previous studies explaining feigning in ADHD acquired data from samples of around 90-100 (Booksh et al., 2010; Frazier et al., 2008; Harrison et al., 2007). The study therefore aimed to recruit 90 participants, 30 each as ADHD, healthy controls, and healthy feigners faking the disorder. Participants with and without a clinical diagnosis of ADHD were selected using the opportunity sampling method. A total of 42 participants between the ages of 18-35 volunteered and were recruited for the study through the university disability service (11%), posters (16%) and word of mouth (73%). Data from two participants were removed as the eye tracker repeatedly lost the pupil during recording. All participants were rewarded with an equal chance to win one of six £25 vouchers. Thirty-one of the 42 participants were healthy younger adult controls. Of the healthy control participants, 15 (7 females; Mage = 24.33; SDage = 4.32) participated as healthy controls, and the remaining 16 (9 females; Mage = 24.25; SDage = 1.88) as healthy feigners. Seven ADHD participants (6 females) with a mean age of 22.71 (SD = 2.22) completed the study. The severity of the ADHD symptoms was analysed using the Adult ADHD Self-Report Scale (for more demographic details see Table 1). The exclusion criteria were: 1) any visual impairment (other than corrected-to-normal vision); 2) any cognitive impairment; 3) an additional diagnosis of a neurological condition; 4) the lack of a proper clinical diagnosis of ADHD. These criteria were applied because such impairments may interfere with participants’ performance on the task.&#13;
Prior to data analysis, one participant was removed from the ADHD group due to the lack of a proper clinical diagnosis. Furthermore, a control participant was excluded on the assumption of a probable mild cognitive impairment, because the individual scored less than 82 (the cut-off) on the Addenbrooke’s Cognitive Examination-III (ACE-III) (see Table 1 for further demographic details). &#13;
Stimuli and Apparatus &#13;
Addenbrooke’s Cognitive Examination-III (ACE-III) &#13;
The ACE-III, developed by Hodges et al., is an extended cognitive screening technique. The items of the test produce 5 sub-scores totalling 100, with each sub-score corresponding to a different cognitive domain: attention (18 points), memory (26 points), verbal fluency (14 points), language (26 points), and visuospatial skills (16 points) (Noone, 2015). Higher scores indicate superior cognitive functioning within the given domain. The validated cut-off point for normal cognitive functioning is 82/100; therefore, individuals who yield a total score of &lt; 82 are assumed to have probable mild cognitive impairment. The ACE-III has proven reliability (α = 0.88), sensitivity (0.93), specificity (1.0) and concurrent validity with alternative cognitive assessments such as the ACE-R (r = 0.99, p &lt; 0.01; Hsieh, 2013).  &#13;
Ishihara Colour blindness test &#13;
The Ishihara colour blindness test, developed by Dr Shinobu Ishihara, was used to assess colour vision deficiency of congenital origin, particularly red-green deficiency (Ishihara, 2011). It consists of 24 coloured plates, each containing a circle of dots of random colours, some of which form numbers. Each plate includes primary and secondary colour dots, with the primary colours appearing as patterns or numbers, while the secondary colours appear as the background (Shaygannejad et al., 2012). Plates 1–15 were utilised because the main goal was simply to separate colour defects from normal colour appreciation. The participants were instructed to read out the numbers aloud, with no more than three seconds' delay. A participant who made errors reading the numbers on two or more plates was considered to have impaired colour vision. &#13;
Royal Air Force (RAF) ruler &#13;
The RAF near point rule is a 50cm long square rule with a cheek rest and a slider holding a revolving four-sided cube. One of the four sides has a vertical line with a central dot for convergence fixation. It is used for determining the near point of convergence (NPC) (Sharma, 2017). The participant is instructed to keep a direct gaze on the dot as the slider is moved towards them and to report when the dot's image breaks into two. The cut-off points for NPC break and NPC recovery are 5 cm and 7 cm, respectively (Pang et al., 2010). &#13;
Adult ADHD Self Report Scale (ASRS-v1.1; Kessler et al., 2005) &#13;
The severity of ADHD symptoms presented by individuals with ADHD was assessed using the ASRS. The ASRS is an 18-item checklist, developed by a World Health Organization (WHO) work group together with the WHO World Mental Health (WMH) Survey Initiative (Kessler et al., 2005), to screen for ADHD in adult patients. Participants indicate how much they agree that each statement relates to their behaviour over the past 6 months. The questions are divided into two parts: Part A and Part B. Part A contains 6 questions that are indicative of symptoms consistent with ADHD and are used for screening purposes; a score of 4 or above denotes symptoms typical of ADHD. The 12 questions in Part B provide a more detailed breakdown of the specific symptoms an individual is presenting. The scale has high concurrent validity, and its internal consistency (Cronbach’s α) was found to be 0.88 (Adler et al., 2006).&#13;
Hospital Anxiety and Depression Scale (HADS) &#13;
The Hospital Anxiety and Depression Scale was developed by Zigmond and Snaith in 1983. It is a 14-item measure used to detect psychological distress (Zigmond &amp; Snaith, 1983). Seven of the items measure anxiety (HADS-A), while the remaining seven measure depressive symptoms (HADS-D). For each item, the participant indicates on a four-point scale the degree to which a given statement relates to how they have been feeling over the past week. The maximum score for each of the anxiety and depression subscales is 21. A score of 0–7 represents “normal”, 8–10 “mild”, 11–14 “moderate” and 15–21 “severe” (Pais-Ribeiro et al., 2018). The scale is reliable and valid in measuring symptoms in both general and psychiatric patients (Bjelland et al., 2002). &#13;
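The severity bands above can be expressed as a simple lookup. This is only an illustrative sketch (the function name is ours, not from the study), using the published cut-offs:

```python
def hads_band(subscale_score: int) -> str:
    """Map a HADS subscale score (0-21) to its severity band,
    using the cut-offs reported by Pais-Ribeiro et al. (2018)."""
    if not 0 <= subscale_score <= 21:
        raise ValueError("HADS subscale scores range from 0 to 21")
    if subscale_score <= 7:
        return "normal"
    if subscale_score <= 10:
        return "mild"
    if subscale_score <= 14:
        return "moderate"
    return "severe"
```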
&#13;
Eye-Tracking Measurement &#13;
Participants’ eye movements were recorded via the EyeLink Desktop 1000 at 500Hz. A chin rest was used to minimise head movements. Participants were seated approximately 55cm from the computer monitor (refresh rate 60 Hz). All stimuli were created and controlled using Experiment Builder software (version 1.10.1630). Two computers were used for the eye-tracking system: a host PC, which tracked eye movements and determined gaze positions, and a display computer, which showed the stimuli during calibration and the experimental trials. &#13;
Calibration  &#13;
Prior to presenting the experimental stimuli, participants completed a 4-point calibration to ensure the eye tracker was accurately tracking their eyes. During this procedure, participants were asked to follow a red dot that moved to the four end points of a “+” shape. &#13;
Prosaccade task &#13;
Participants were asked to complete 16 gap trials as quickly and accurately as possible. First, participants were instructed to look at a fixation point, a white target displayed at the centre of the screen for 1000ms, to centre their gaze. They were then told to focus on a red lateralised target, presented randomly to the left or right of the screen at 4° of visual angle for 1200ms. The temporal gap in stimulus presentation arose from a 200ms blank screen displayed between the offset of the white fixation stimulus and the onset of the red target. &#13;
Anti-saccade task &#13;
For the anti-saccade task, participants completed 24 gap trials, preceded by 4 practice trials. They were asked to look at a central white fixation point presented for 1000ms before shifting their gaze and attentional focus to the side of the screen opposite to where a green target appeared. The green lateralised target was displayed randomly to the left or right side of the screen at 4° of visual angle for 2000ms. As in the prosaccade task, a 200ms blank screen was presented as a gap between the fixation point and the target. &#13;
 Procedure &#13;
The study was approved by the Lancaster University Psychology Department Ethics Committee. Prior to study commencement, healthy younger adult volunteers were randomly assigned to either the healthy control or the healthy feigner (asked to feign ADHD) group. All individuals with a formal clinical diagnosis of ADHD were assigned to the ADHD group. &#13;
Participants were required to visit the lab in order to take part. Before commencing the study, participants provided informed consent. After the required demographic data were collected, participants were screened for the probable presence of mild cognitive impairment using the ACE-III, and for any visual impairments using the RAF rule and the Ishihara colour blindness test. Participants were then asked to complete the HADS, to screen for psychological distress. Additionally, the ADHD participants completed the ASRS questionnaire, to determine the severity of the disorder. &#13;
On completion of the pre-study questionnaires, participants were provided with a task information leaflet. &#13;
At this time, control and ADHD participants were presented with a vignette (Appendix B) detailing an individual trying to feign ADHD. Those assigned to the feigning condition were instead presented with a vignette (Appendix C) that explained the symptoms of ADHD, and were asked to imagine themselves in a situation where they were to feign ADHD. All participants were then asked to complete the two eye movement tasks and the associated calibration trials. Healthy controls and those with ADHD were asked to complete the tasks honestly, to the best of their ability, whereas those in the feigning condition were asked to complete the tasks whilst pretending to have ADHD (without any over-exaggeration). On completion of the tasks, all participants were informed that they would be entered into a lottery to win £25 and were provided with a debrief sheet (Appendix H), which explained the details of the study. &#13;
Data Analysis &#13;
DataViewer software (version 3.2) was used to extract the raw EyeLink data, which were then analysed using the bespoke software SaccadeMachine. Spikes and noise were removed by filtering out frames with a velocity greater than 1,500 deg/s or an acceleration greater than 100,000 deg/s². Fixations and saccadic events were identified using the EyeLink parser, and saccades were extracted alongside multiple temporal and spatial variables. Trials were eliminated when the participant did not direct their gaze onto the central fixation. A temporal window of 80–700ms, measured from the onset of the target display, was used: anticipatory saccades made before 80ms and excessively delayed saccades made after 700ms were removed. The resulting data comprised latency and error rate, where latency is the time taken to initiate a correct saccade and error rate is the percentage of trials the participant got wrong. Data from one control participant were removed because their low ACE score suggested probable mild cognitive impairment, and data from one ADHD participant were removed due to the lack of a formal diagnosis. &#13;
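The trial-cleaning rules described above could be sketched as follows. The thresholds are taken from the text, but the data layout (simple dicts per frame and per trial) is an assumption for illustration, not the actual SaccadeMachine or DataViewer format:

```python
# Illustrative sketch of the cleaning rules described in the text.
MAX_VELOCITY = 1500         # deg/s   - frames above this are treated as spikes/noise
MAX_ACCELERATION = 100_000  # deg/s^2
MIN_LATENCY = 80            # ms - earlier saccades counted as anticipatory
MAX_LATENCY = 700           # ms - later saccades counted as excessively delayed

def clean_frames(frames):
    """Drop frames whose velocity or acceleration exceeds the noise thresholds."""
    return [f for f in frames
            if f["velocity"] <= MAX_VELOCITY
            and f["acceleration"] <= MAX_ACCELERATION]

def keep_trial(trial):
    """Keep a trial only if fixation was held centrally and the first
    saccade falls inside the 80-700 ms window after target onset."""
    return (trial["fixated_centrally"]
            and MIN_LATENCY <= trial["saccade_latency_ms"] <= MAX_LATENCY)
```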
All data were then assessed to ensure they met the assumptions required for statistical analysis. First, the data were checked for outliers (±2 SD); this revealed three outliers for both the pro- and anti-saccade measures. Given that these outliers could skew the subsequent analysis, all outliers were removed. The data were then checked against the assumption of normality. Prosaccade latency satisfied the normality assumption (see Figure 1), so a one-way ANOVA was applied to investigate the difference in latency across the groups. As the prosaccade error rate data were skewed (see Figure 2), a Kruskal-Wallis H test was used to determine the difference across the groups. After outlier removal, the data satisfied the normality assumption for both anti-saccade latency (see Figure 3) and error rate (see Figure 4); hence a one-way ANOVA was used to test the group differences for both measures, and a post hoc Tukey’s Honestly Significant Difference test was used to determine the significance of the difference in anti-saccade latency. &#13;
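The test-selection logic above (parametric ANOVA when normality holds, Kruskal-Wallis otherwise) can be sketched with SciPy. Note this is a sketch under assumptions: the ±2 SD outlier rule is as described, but the use of a Shapiro-Wilk test for the normality check is our illustrative choice (the study assessed normality from the figures):

```python
import numpy as np
from scipy import stats

def remove_outliers(x, n_sd=2.0):
    """Drop observations more than n_sd standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    keep = np.abs(x - x.mean()) <= n_sd * x.std(ddof=1)
    return x[keep]

def compare_groups(*groups, alpha=0.05):
    """One-way ANOVA if every group passes a Shapiro-Wilk normality check,
    otherwise the non-parametric Kruskal-Wallis H test."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        return "ANOVA", stats.f_oneway(*groups)
    return "Kruskal-Wallis", stats.kruskal(*groups)
```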
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3121">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3122">
                <text>SPSS.sav for results&#13;
Word.doc for demographic and data acquisition form&#13;
PDF for consent form</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3123">
                <text>George_2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3124">
                <text>Lettie and Delyth</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3125">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3126">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3127">
                <text>Data and Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3128">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3134">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3195">
                <text>Dr Megan Rose Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3196">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3197">
                <text>Clinical&#13;
&#13;
Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3198">
                <text>38</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3199">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="150" public="1" featured="0">
    <fileContainer>
      <file fileId="153">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2a6af9e3bd67966c26821868b9693304.pdf</src>
        <authentication>7822a912e947086abb3415b7484d575b</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3102">
                <text>Facts May Care About Your Feelings:  The Effects of Empirical and Anecdotal Evidence in the Perception of Climate Change </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3103">
                <text>Constance Jordan-Turner</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3104">
                <text>21/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3105">
<text>Although the effects of human-made climate change become ever more potent, the consensus gap between climate scientists and the public is as wide as ever. It is critical that climate change communication is improved to try to close this gap. There are several strategies that can be implemented, including using anecdotes alongside or instead of empirical evidence to elicit emotions. In this study, 74 members of the public completed a survey.  Participants were randomly assigned to one of four conditions which dictated the type of evidence they received: no evidence, empirical evidence, anecdotal evidence, or both empirical and anecdotal evidence.  Results suggest that, in general, there was no effect of evidence on participants’ perceptions of climate change. This result held even after controlling for worldview and ideology. These findings have implications for the theory of inserting emotion into climate change communication.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3106">
                <text>Climate change, communication, perception, emotion, evidence</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3107">
                <text>Participants and design&#13;
There were 74 participants (26 male; 46 female; one non-binary; one preferred not to say). The mean age of the participants was 37.99 (SD = 16.93). Participants were recruited via advertising the study on the researcher’s social media accounts (Facebook and Instagram) using a standardised advertisement (see Appendix A) and through word of mouth. Participants were all members of the general public. The study manipulated two independent variables in a between-participants design: anecdotal evidence (without-anecdotal vs. with-anecdotal) and empirical evidence (without-empirical vs. with empirical), resulting in four conditions. Participants were randomly allocated to one of the four conditions, subject to the constraint of equal cell numbers. &#13;
&#13;
This study gained ethical approval from the Faculty of Science and Technology Research Ethics Committee.&#13;
Evidence Passages&#13;
Empirical Evidence&#13;
The empirical evidence vignette included a statement explaining that human-induced carbon dioxide emissions and global average temperature have synchronously increased since pre-industrial times, accompanied by graphs demonstrating these upward trends.  The vignette also highlighted the scientific consensus that human-made climate change is occurring and will have adverse consequences. Further, the vignette explained that these adverse consequences had already begun to materialise.  The increase in extreme weather events was highlighted in a graph that showed the tripling of weather-related disasters between 1980 and 2010.  Finally, the vignette finished with references for the information it contained (see Appendix B).&#13;
Anecdotal Evidence&#13;
The anecdotal evidence vignette contained information about Storms Dudley, Eunice and Franklin, which all made landfall in Britain in quick succession in 2022. The storms were a weather-related event that some scientists have linked to climate change (Barrett, 2022). Specifically, the vignette included information about the storms’ destructiveness, such as the cost of the damage they caused and the number of people killed.  The destructiveness of the storms was highlighted with images of damage and flooding in Wells, Otley, and Brentwood, as well as an image from Blackpool demonstrating the height and power of the waves caused by the storms.  The vignette included a stock image of a man standing in a flooded living room and a short passage outlining the experience of a fictitious character named Matt Johnson whose family home had been severely flooded as a result of the storms. The vignette concluded with a statement from climate scientist Robert Klein who argued that the impact of the storm was exacerbated by climate change, which generated “super storm” conditions.  Finally, there was a reference to an article about the storms and their link to climate change (see Appendix C).&#13;
Measures&#13;
Table 1 contains an overview of the measures embedded in the questionnaire.  For the full questionnaire, please refer to Appendix D.&#13;
Disaster Belief&#13;
The disaster belief measure elicited estimates of the frequency of weather-related disasters expected to occur in the listed years. Participants were given an approximate frequency for 2019 from the International Disaster Database. The measure consisted of six items: 2030, 2040, 2050, 2060, 2070 and 2080. Participants responded by typing their estimated number next to the relevant year.&#13;
Harm Extent&#13;
The harm extent measure consisted of questions concerning how much harm participants think climate change will cause themselves, their family, their community, Britain, other countries, and future generations. There were six items, such as ‘How much do you think climate change will harm you?’ and ‘How much do you think climate change will harm people in Britain?’ Responses were rated from (1) ‘not at all’ to (4) ‘a great deal’.&#13;
Harm Timing&#13;
	The harm timing measure consisted of questions concerning when participants thought climate change will cause harm to themselves, their family, their community, Britain, other countries, and future generations. There were only two items, ‘When do you think climate change will begin to harm Britain?’ and ‘When do you think climate change will begin to harm other countries?’. Responses were rated as (1) ‘Never’, (2) ‘100 years’; (3) ‘50 years’; (4) ‘25 years’; (5) ‘10 years’ and (6) ‘Right now’.&#13;
CO2 Attributions&#13;
	The CO2 attributions measure assessed how much participants think human carbon dioxide emissions contribute to events such as heatwaves, rising sea levels, flooding, and Storms Dudley, Eunice, and Franklin. There were six items, such as ‘CO2 contribution to the observed increase in atmospheric temperature during the last 130 years’, ‘CO2 contribution to the European heat wave in 2022 that killed over 5,000 people’, and ‘CO2 contribution to storms Dudley, Eunice, and Franklin in the UK (2022)’. These responses were gathered using a sliding scale from 0 to 100%.&#13;
Intention&#13;
The intention measure consisted of questions asking about participants’ pro-environmental intentions. There were seven items. Examples of items include ‘I will take part in an environmental event (e.g., Earth hour)’, ‘I will give money to a group that aims to protect the environment’, and ‘I will switch to products that are more environmentally friendly’. The response options were simply ‘Yes’ or ‘No’.   &#13;
Mitigation&#13;
	The mitigation measure consisted of questions asking about participants’ support for mitigating policies. There were five items. Example items include, ‘Signing an international treaty that requires Britain to cut its carbon dioxide emissions by 90% by 2050’, ‘Adding a surcharge to electrical bills to establish a fund to help make buildings more energy efficient and to teach British citizens how to reduce energy use’, and ‘Providing tax rebates for people who purchase energy-efficient vehicles or solar panels’. Responses were rated from (1) ‘Strongly Oppose’ to (4) ‘Strongly Support’.&#13;
CO2 Adjustment&#13;
	The CO2 adjustment measure measured how much participants think Britain should adjust its CO2 emissions over the next 10 years. There was only one item: ‘How much should Britain adjust CO2 emissions during the next 10 years?’. Responses were rated from (1) ‘Not at all’ to (6) ‘Reduce by 50%’.&#13;
Free-Market Support&#13;
	The free-market support measure consisted of questions asking about participants’ support for the free market. There were five items. Example items include ‘An economic system based on free-markets, unrestrained by government interference, automatically works best to meet human needs’ and ‘The preservation of the free-market system is more important than localized environmental concerns’. Two items, ‘Free and unregulated markets pose important threats to sustainable development’ and ‘The free-market system is likely to promote unsustainable consumption’, required reverse coding upon analysis.&#13;
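Reverse coding on a 5-point scale simply maps each response r to 6 − r, so that agreement with a negatively keyed item counts towards the same pole as the rest of the measure. A minimal sketch (function name is illustrative):

```python
def reverse_code(score: int, scale_max: int = 5) -> int:
    """Reverse-code a Likert response: on a 1-5 scale,
    1 <-> 5, 2 <-> 4, and 3 stays 3."""
    if not 1 <= score <= scale_max:
        raise ValueError("score outside scale range")
    return scale_max + 1 - score
```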
Table 1&#13;
Measures embedded within the questionnaire. The first column contains the name of the measures; the second column contains the instructions on how to respond to items in that measure; and the third column describes how answers to the items were coded.   &#13;
Measure Name	Questions	Coded Response&#13;
Disaster belief	Please provide an estimate of the frequency of weather-related disasters that will occur in each year (6 items).	Participants used the keyboard to type in a number for each year.&#13;
Harm extent	The following items examine your thoughts about the extent of harm that will be caused by climate change (6 items).	4-point scale: (1) ‘Not at all’; (2) ‘A little’; (3) ‘A moderate amount’; (4) ‘A great deal’.&#13;
Harm timing	The following items examine your thoughts about when climate change will begin to cause harm (2 items).	6-point scale: (1) ‘Never’; (2) ‘100 years’; (3) ‘50 years’; (4) ‘25 years’; (5) ‘10 years’; (6) ‘Right now’.&#13;
CO2 attribution	For each of the following questions, please estimate the contribution from human CO2 emissions to each event. For example, 0% would mean humans are not at all responsible, whereas 100% would mean that human CO2 emissions are fully responsible (6 items).	Participants used the mouse to place their response on a sliding scale labelled ‘0’, ‘10’, ‘20’, ‘30’, ‘40’, ‘50’, ‘60’, ‘70’, ‘80’, ‘90’ and ‘100’.&#13;
Pro-environmental intentions	Please indicate whether or not you will engage in the following actions (7 items).	0 = No; 1 = Yes&#13;
Mitigation	How much do you support or oppose the following policies? (5 items)	4-point scale: (1) ‘Strongly Oppose’; (2) ‘Oppose’; (3) ‘Support’; (4) ‘Strongly Support’.&#13;
CO2 adjustment	How much should Britain adjust CO2 emissions during the next 10 years?	6-point scale: (1) ‘Not at all’; (2) ‘Reduce by 10%’; (3) ‘Reduce by 20%’; (4) ‘Reduce by 30%’; (5) ‘Reduce by 40%’; (6) ‘Reduce by 50%’.&#13;
Free-market belief	Please indicate how much you agree with each statement (5 items).	5-point scale: (1) ‘Strongly Disagree’; (2) ‘Disagree’; (3) ‘Neutral’; (4) ‘Agree’; (5) ‘Strongly Agree’.&#13;
Demographic questions	What is your age?	Participants used the keyboard to type in a number.&#13;
	What is your gender?	1 = Male; 2 = Female; 3 = Non-binary; 4 = Other; 5 = Prefer Not to Say&#13;
&#13;
Procedure&#13;
All participants completed a questionnaire assessing their belief in and concern about human-made climate change and their mitigation beliefs.  The questionnaire was administered online using Qualtrics survey software.  Participants responded to the questionnaire by using either the mouse to select answers or the keyboard to type in numbers. &#13;
At the beginning of the questionnaire, all participants received an information sheet about the aim of the study, the lack of risks associated with participating, and how participant information is stored. Participants were asked to indicate their informed consent. For the full participant information sheet and consent form, please refer to Appendix E. After participants gave their consent and continued onto the survey, they were asked their age and gender. They were then presented with evidence according to the condition they were assigned to.  There were four conditions: no evidence, empirical evidence, anecdotal evidence, and both empirical and anecdotal evidence.&#13;
After they had read one or both evidence passages, participants answered the disaster belief measure. Next, they answered the CO2 attribution measure. Then they answered the harm extent measure and the harm timing measure. After that was the intention measure, and then they answered the mitigation measure. In the final part of the questionnaire, they were asked how much Britain should cut its CO2 emissions over ten years, and then questions on their support for the free market. Participants were then asked demographic questions about their age and gender. Finally, the participants were given a debrief sheet (Appendix F).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3108">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3109">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3110">
                <text>Jordan-Turner2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3111">
                <text>Sacha Crossley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3112">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3113">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3114">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3129">
                <text>Dr. Mark Hurlstone</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3130">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3131">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3132">
                <text>74</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3133">
                <text>ANCOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="149" public="1" featured="0">
    <fileContainer>
      <file fileId="144">
        <src>https://www.johnntowse.com/LUSTRE/files/original/17e340bee54ebac611344515a86f9ff6.pdf</src>
        <authentication>4a222c6141db92dc7ee55aa00fb0d0ce</authentication>
      </file>
      <file fileId="145">
        <src>https://www.johnntowse.com/LUSTRE/files/original/896fd29b37e809eb53d43c14fa1b8eca.zip</src>
        <authentication>a0f3346a973237810f84764261f03f24</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3082">
                <text>Does implicit mentalising involve the representation of others’ mental state content? </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3083">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3084">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3085">
<text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame of reference, which creates a spatial overlap between stimulus and response location in the joint (but not individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study investigated the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which typical geometric stimuli were replaced with a unique set of animal silhouettes. Half of the set was surreptitiously assigned to the participant and the other half to their partner. Critically, to examine the content of co-representation, participants were afterwards presented with a surprise image recognition task. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results are discussed, and possible methodological remedies for future studies are suggested.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3086">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3087">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to run the stimuli selection pre-test remotely. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes for use in the main experiment. Recognisability was an important consideration because participants would catch only a brief glimpse of each animal; the ability to recognise the silhouettes quickly and subconsciously was therefore paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and presented sequentially. Each image was displayed for 1000 ms to match the duration of stimulus exposure in the final experimental design.&#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating for each naming attempt (again on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability scores for each animal were averaged and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of each species. Because two animals tied for 32nd place with the same recognisability score, the one with the higher name-guessing confidence rating was selected.&#13;
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously taken part in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females). Fifty-one were students, staff, or members of the public at Lancaster University, recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate the strength of previously found effects) and the additional memory/recognition task, a conservative effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a medium-small repeated-measures, within-between interaction was approximately 52.&#13;
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 at 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.&#13;
The 32 animals chosen via the pre-test for use in the main experiment (Simon/Recognition task) were recoloured entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task &#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to a third computer, where they sat on either the left or the right side of the screen, approximately 60 cm away (diagonally, at approximately 45° from the centre of the screen), with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm apart. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton whenever an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which they appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task which came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. These 16 animals were further divided in half and matched to each of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the Recognition task. Participant sitting location (left/right), stimulus colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the Recognition task) were counterbalanced between participants. Additionally, stimulus presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
After reading brief instructions, participants completed a practice section. Once participants had accumulated eight more correct trials than incorrect or timed-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks of 16 trials each (corresponding to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., 8) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of compatible and incompatible trials for each participant (i.e., four of each per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible.&#13;
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks—in some studies, each trial (and thus stimulus presentation) terminated immediately upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was displayed for only a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), followed by a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration to 1000 ms irrespective of participant response ensured that each animal colour/species was displayed for an equal duration of time. This was important so as not to bias participants’ incidental memory towards trials in which a participant was slower to respond (and would therefore have kept the stimulus on screen for longer, disproportionately encouraging encoding).&#13;
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal had previously been assigned to, and presented in the Simon task as, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in the individual condition, this simply refers to the colour not assigned to the participant, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to the individual computers which they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one by one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals had appeared in during the previous (Simon) task—so long as they remembered having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Of the 32 animal silhouettes presented, 16 had been seen in the Simon task, while the remaining 16 yet-to-be-seen animal images were included as foils. The participants’ responses to these two questions were recorded as key response variables.&#13;
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked whether they had any suspicions about what the study was testing, or whether they had paid specific attention to, and/or deliberately memorised, the animal species shown in the Simon task (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which could undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner via two questions. The first was a text-based question asking how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging from “I have never seen him/her before: s/he is a stranger to me.” to “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consists of pictographic representations of degrees of interpersonal relationship. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each consisting of two Venn diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was included as a confirmatory measure for the IOS scale, which served as the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3088">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3089">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3090">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3091">
                <text>Malcolm Wong&#13;
Aubrey Covill&#13;
Elisha Moreton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3092">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3093">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3094">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3095">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3096">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3097">
                <text>Dr. Jessica Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3098">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3099">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3100">
                <text>25 in a pre-test, 65 in the main experiment</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3101">
                <text>Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="148" public="1" featured="0">
    <fileContainer>
      <file fileId="146" order="2">
        <src>https://www.johnntowse.com/LUSTRE/files/original/055f608897628d54c7f2a243de72eb63.txt</src>
        <authentication>849ed4bf5f0ebe3ec34bccd7856d6c63</authentication>
      </file>
      <file fileId="147" order="3">
        <src>https://www.johnntowse.com/LUSTRE/files/original/0caf76688d0fd87a937daad8cef0af66.txt</src>
        <authentication>913353fac700af17d02d4381a7540773</authentication>
      </file>
      <file fileId="148" order="4">
        <src>https://www.johnntowse.com/LUSTRE/files/original/3385513f4c4cf01a4bbf9e074f9fcf10.csv</src>
        <authentication>16a611e6b866f8552c70c6cb4c5f698a</authentication>
      </file>
      <file fileId="143" order="5">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8c74bde845d079abadf048bba0316db4.doc</src>
        <authentication>c06cb4848dbba3e5b81d80f0518d47b5</authentication>
      </file>
      <file fileId="149">
        <src>https://www.johnntowse.com/LUSTRE/files/original/0dfdf4ec4a7cc89c6cc485920a130a43.doc</src>
        <authentication>ebc62a1e24e476b869cb3c367f917845</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3062">
                <text>Lights, Camera, Action: Investigating Advertisement Susceptibility in Films Amongst Individuals with Parkinson’s Disease and Controls. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3063">
                <text>Elena Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3064">
                <text>07.09.2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3065">
                <text>Product placement is the merging of entertainment with advertising, and its presence in our daily lives is increasing. Despite this, there is an inherent lack of consideration of its influence amongst vulnerable populations such as individuals with Parkinson’s disease (PD). Research suggests that individuals with PD have reduced inhibitory control (IC), which may drive impulsive behaviours. A concern, therefore, is the influence that product placement may have on the purchase behaviour of individuals with PD, alongside a possible propensity to partake in risky and impulsive behaviours. Thus, this study aimed to examine whether reduced IC increases the likelihood that an individual with PD will be susceptible to product placement. The study adopted an experimental approach, recruiting 20 healthy younger controls, 20 healthy older controls, and 13 individuals with mild to moderate PD to watch two films containing product placement: one featuring Coca Cola and the other an Audi. A pre and post product placement questionnaire was used to measure change in purchase behaviour before and after exposure to product placement, and an antisaccade eye tracking task and a Stroop task were used to measure IC. An ANOVA indicated that IC was significantly impaired in individuals with PD compared to healthy controls. However, linear mixed effects modelling suggested that IC may not be a factor that increases the likelihood that an individual will be more susceptible to product placement. Implications of these findings are discussed relative to other clinically vulnerable populations with similar cognitive impairment symptomology, and the consequent need for future research to continue to explore product placement susceptibility amongst vulnerable populations. &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3066">
                <text>Parkinson’s Disease, Inhibitory Control, Product Placement Susceptibility &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3067">
                <text>Method&#13;
Participants&#13;
A voluntary sample of 54 participants was recruited: 20 healthy younger controls (YC; 16 females and four males, Mage = 22.70, SDage = 2.42), 20 healthy older controls of comparable age to those with Parkinson’s (OC; females and males, Mage = 66.85, SDage = 8.53), and 15 adults with mild-moderate idiopathic PD (females and males, Mage = 65.00, SDage = 7.84). As this research area is entirely novel, the sample size was modelled on comparable population studies that have explored IC (Meyer et al., 2020; Paz-Alonso et al., 2020). YC were defined as young adults aged between 18 and 26 years with no neurological or cognitive conditions (Stroud et al., 2015). OC were defined as adults aged between 50 and 85 years with no neurological or cognitive conditions (Zhang et al., 2020). The participants with PD had been diagnosed with mild-moderate idiopathic PD, characterised by mild-moderate impairments of motor and cognitive functioning (DeMaagd &amp; Philip, 2015). &#13;
The exclusion criteria for both the healthy controls and individuals with PD were a diagnosis of any additional neurological or cognitive condition other than PD. Moreover, given that visual impairments may affect the visual experience of product placement, all participants were screened for red-green colour blindness using the Ishihara test. The standardised cut-off for normal vision is 15 (Rodriguez-Carmona &amp; Barbur, 2017); therefore, participants who scored 14 or less were excluded, as this is indicative of the presence of red-green colour blindness. &#13;
All participants had normal or corrected-to-normal vision. The Addenbrooke’s Cognitive Examination-III (ACE) was used to screen for the presence of cognitive impairment (Bruno &amp; Vignaga, 2019). Participants’ data were only included in the analysis if they achieved a score within the normal range (≥ 82 out of 100). Following this exclusion criterion, one PD participant’s data was removed. Research has shown saccadic eye movements to be influenced by cognitive dysfunction (Hutton, 2008; MacAskill et al., 2012); thus, cognitive impairments needed to be screened for, as this study measured saccadic eye movements as a measure of IC. Subsequently, following exclusion criteria, 53 participants’ data were included in the analysis. &#13;
PD participants were selected who were at Hoehn and Yahr stage three or lower (see Table 1, attached in the files below, for participants’ background characteristics). The Hoehn and Yahr scale is used to give a summary of the laterality and severity of PD symptomology (Readman et al., 2021b). Five participants presented unilateral symptoms only (stage one), seven participants presented bilateral symptoms with no impairment of balance (stage two), and one participant presented bilateral symptoms with some postural instability but was not physically dependent (stage three). PD symptomology was assessed using the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) (Evers et al., 2019). All PD participants were tested under their usual medication regimes and were in a typical functioning ‘ON’ phase. Eight participants were taking a dopamine agonist (e.g., Ropinirole), eight were taking a combination drug (e.g., Madopar), six were taking a monoamine oxidase inhibitor (e.g., Rasagiline), and two were taking a catechol-O-methyltransferase (COMT) inhibitor (e.g., Entacapone). &#13;
YC were recruited through the researcher’s social network, whereas both OC and individuals with PD were recruited through established research interest databases (OC: C4AR database; PD: MRR PD interest database (FST2005)). &#13;
Materials&#13;
Health and Demographic Questionnaire&#13;
	The health and demographic questionnaire (HADQ) was developed and distributed using Qualtrics (Qualtrics, 2022), an online software platform that aids the process of building, distributing, and analysing surveys (Carpenter et al., 2019). The HADQ comprised four distinct subsections pertaining to participants’ general demographics and more specific health-related measures.&#13;
	Demographic Questions. For participant group allocation, participants were asked for their age, sex, and whether they held a diagnosis of PD. Information about participants’ age also allowed exploration of the possible effect of age, as well as of PD, on product placement susceptibility. &#13;
The Hospital Anxiety and Depression Scale (HADS). The HADS is a 14-item (seven items pertaining to anxiety and seven to depression) self-report assessment of anxiety and depression suitable for both psychiatric and non-psychiatric populations (Stern, 2014). All items are rated on a 4-point severity scale, with a total score of 11 or more indicative of probable anxiety or depression respectively (Caci et al., 2003; Edelstein et al., 2010). The HADS has been found to have high construct validity, and very good internal consistency has been observed for both the anxiety (Cronbach’s α = .83) and depression (Cronbach’s α = .82) subscales (Bjelland et al., 2002; Johnston et al., 2000; Mondolo et al., 2006). &#13;
	Edinburgh Handedness Inventory. The Edinburgh Handedness Inventory is a 10-item self-report questionnaire in which participants indicate a preference for which hand they would use when completing a range of daily activities (e.g., brushing teeth) (Robinson, 2013). From this, a handedness score ranging from 100 (strong right) to -100 (strong left) is derived. Excellent internal consistency has been observed for the 10-item Edinburgh Handedness Inventory (Cronbach’s α = .94; Fazio et al., 2013). Previous literature suggests that handedness and eye dominance are correlated because of hemispheric specialisation (McManus, 1999; Willems et al., 2010); therefore, establishing participants’ handedness was indicative of their dominant eye when measuring IC through saccadic eye movements. &#13;
PD Diagnosis Questions. Participants with PD were asked to provide specifics relating to their diagnosis, including years since diagnosis, years since presumed onset, and which medication, and at what dosage, they were prescribed. These items were necessary to investigate whether PD severity and medication type influence product placement susceptibility.&#13;
Screening Assessments&#13;
	Cognitive Impairments. The Addenbrooke’s Cognitive Examination-III (ACE) is a cognitive assessment that screens for the probable presence of cognitive impairments (Noone, 2015). The ACE comprises 24 items that assess attention, memory, fluency, language, and visuospatial processing (Bruno &amp; Vignaga, 2019). Very good internal consistency (Cronbach’s α = .88; Kan et al., 2019) and good validity (Matias-Guiu et al., 2017; Takenoshita et al., 2019) have been reported for the ACE. &#13;
Visual Impairments. The Ishihara test is a reliable (Birch, 1997) 17-item assessment for red-green colour blindness that requires participants to read aloud a set of numbers on Ishihara plates made up of coloured dots (Marey et al., 2015). &#13;
PD Symptomology. The MDS-UPDRS is a tool to measure the progression of PD symptomology (Evers et al., 2019). It comprises a series of tasks that assess PD symptomology within the last week, across the domains of mentation, behaviour and mood, activities of daily life, motor abilities, and complications of therapy (Holden et al., 2018). Very good internal consistency has been observed for the MDS-UPDRS (Cronbach’s α = .90; Abdolahi et al., 2013), and it provides a valid assessment of PD symptomology severity (Goetz et al., 2008; Metman et al., 2004). &#13;
Measures of Inhibitory Control &#13;
	Eye Tracking Tasks. The prosaccade and antisaccade tasks were created using Experiment Builder Software Version 1.10.1630, and the data were extracted and analysed using Data Viewer Software. Eye movements were recorded via the EyeLink Desktop 1000 at 500 Hz. Whilst recording eye movements, participants were asked to place their chin on a chin rest to reduce head movements. Participants sat approximately 55cm away from the computer monitor (monitor refresh rate: 60Hz). &#13;
Firstly, participants were asked to complete the 4-point calibration task to improve eye tracking accuracy (Pi &amp; Shi, 2019). In this task, participants were asked to follow a red target around the screen as it moved up, down, left, and right. Next, participants completed the prosaccade eye tracking task. To centralise participants’ gaze, participants were instructed to look at a white fixation target displayed on a computer screen for 1000ms. Participants were then instructed to look towards a red lateralised target that appeared on screen for 1200ms at a 4° visual angle either to the left or to the right of where the white central dot had been located, as quickly and as accurately as possible (Readman et al., 2021a). The eye tracking equipment measured participants’ saccades and latencies (how long it took for participants to fixate on the red target). A total of 16 gap trials were presented, with a blank interval screen displayed for 200ms between the extinguishment of the white fixation target and the initial appearance of the red target, which resulted in a temporal gap in stimuli presentation. The prosaccade task was incorporated to ensure that alterations in participants’ antisaccade task performance were not due to impaired prosaccades but rather were indicative of alterations in IC. &#13;
For the antisaccade task, participants were first asked to look at a central white fixation dot for 1000ms to centralise their gaze. Participants were then asked to direct their gaze and attentional focus to the opposite side of the screen from where a green lateralised target was presented for 2000ms at a 4° visual angle either to the left or to the right of where the white central dot had been located, as quickly and accurately as possible (Derakshan et al., 2009). See Figure 1 above for a visual display of an antisaccade task. The eye tracking equipment measured participants’ saccades, latencies (how long it took participants to fixate their gaze in the opposite direction to the green target), and error rates (how many times participants incorrectly looked at the green target). A total of 16 gap trials were presented, with a blank interval screen displayed for 200ms between the extinguishment of the white fixation target and the initial appearance of the green target, which resulted in a temporal gap in stimuli presentation. &#13;
	Stroop Test. The Stroop test was conducted using PsyToolkit’s free online demonstration (PsyToolkit, 2022). Unlike the original Stroop test, in which participants had to say the ink colour aloud (Stroop, 1935), PsyToolkit’s online Stroop test allowed for a more accurate measurement of participants’ reaction times (ms) through pressing the key corresponding to the ink colour (Brenner &amp; Smeets, 2018). Participants completed the Stroop test on a HP ProBook 470 G5 17.3” laptop (HP, 2022), and sat approximately 30cm away from the laptop. Presenting the Stroop test on this laptop enabled participants to view the test on a large screen, thus improving the accessibility of the test. The colour words presented to participants were ‘red’, ‘green’, ‘yellow’, and ‘blue’.&#13;
	Participants were instructed to press the key corresponding to the initial letter of the ink colour of the printed word presented on screen, as quickly and accurately as possible. For example, for the word RED printed in blue ink, the correct response would be to press the key ‘B’ for blue. A total of 40 gap trials were presented. For each trial, a colour word was presented on screen for 2000ms. The colour word was either congruent (the ink colour matches the word’s meaning, e.g., GREEN printed in green ink) or incongruent (the ink colour differs from the word’s meaning, e.g., GREEN printed in red ink). There was a 100ms gap in presentation of the word, in which a white cross was presented on a black interval screen. Participants’ congruent and incongruent reaction times (ms), correct Stroop score (correctly identified ink colours out of 40), and Stroop effect (incongruent reaction time (ms) minus congruent reaction time (ms)) were recorded.&#13;
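The Stroop measures described above reduce to simple arithmetic over trial records. A minimal Python sketch follows; the trial record fields are hypothetical (not PsyToolkit's actual export format), and computing RT means over correct trials only is an assumption for illustration:

```python
# Sketch of the Stroop measures: correct score out of 40, mean congruent and
# incongruent reaction times, and the Stroop effect (incongruent minus
# congruent RT). Field names are hypothetical, not PsyToolkit's real output.
from statistics import mean

def stroop_summary(trials):
    correct_score = sum(t["correct"] for t in trials)
    # Assumption for illustration: RT means use correct trials only.
    congruent_rt = mean(t["rt_ms"] for t in trials if t["congruent"] and t["correct"])
    incongruent_rt = mean(t["rt_ms"] for t in trials if not t["congruent"] and t["correct"])
    return {
        "correct_score": correct_score,
        "congruent_rt": congruent_rt,
        "incongruent_rt": incongruent_rt,
        # A larger Stroop effect indicates weaker inhibitory control.
        "stroop_effect": incongruent_rt - congruent_rt,
    }

trials = [
    {"congruent": True,  "correct": True, "rt_ms": 520},
    {"congruent": True,  "correct": True, "rt_ms": 540},
    {"congruent": False, "correct": True, "rt_ms": 700},
    {"congruent": False, "correct": True, "rt_ms": 680},
]
summary = stroop_summary(trials)
```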
The ease with which the Stroop test can be conducted in a non-laboratory environment and the simplicity with which the colour words can be translated into other languages increase its accessibility and universality as a measure of IC (Gass et al., 2013). The test would, however, be an invalid measure of IC for individuals affected by colour blindness or dyslexia, limiting the populations the Stroop task can assess (Scarpina &amp; Tagini, 2017). &#13;
Product Placement Film Clips&#13;
The incorporation of film clips containing product placement was guided by the prominent use of film clips within previous research investigating product placement susceptibility (Kamleitner &amp; Jyote, 2013; Yang &amp; Roskos-Ewoldsen, 2007). Jurassic World featuring Coca Cola and Avengers Endgame featuring Audi were chosen as they were popular films containing product placement that both younger and older adults would recognise (Malaj, 2022), minimising the effects of familiarity. Furthermore, these two film clips were chosen because they featured products of different monetary value, thus controlling for the potential effects of monetary value on product placement susceptibility (McDermott et al., 2006). &#13;
	Both film clips were downloaded from YouTube and trimmed to last approximately one minute each to lessen the study length, given the propensity for individuals with PD to tire because of their symptomology (see Appendix A for screenshots of the two film clips). The two film clips were shown on a HP ProBook 470 G5 17.3” laptop because the large screen enhanced participants’ visual experience of product placement (HP, 2022).&#13;
Measure of Purchase Intention&#13;
	Separate pre and post product placement questionnaires for each clip were made using Qualtrics (Qualtrics, 2022). To measure purchase behaviour, participants were asked how strong their preference was to buy the featured drink/car brands on a Likert scale of one to seven (from one = “Extremely unlikely” to seven = “Extremely likely”). Literature has found 7-point Likert scales to be more reliable than smaller scales, such as 5-point scales, because they allow for more accurate and differentiated responses (Cicchetti et al., 1985; Finstad, 2010). The use of a 7-point Likert scale therefore provided a more sensitive and accurate measurement of product placement susceptibility. Both the pre and post product placement questionnaires asked participants the same questions, enabling us to measure whether there was a change in participants’ responses prior to and after exposure to product placement (Matthes et al., 2007).&#13;
Design&#13;
	The study used a 3 (between-subjects factor, Participant Status: Healthy Young Controls vs. Healthy Older Controls vs. Individuals with Parkinson’s Disease) × 2 (within-subjects factor, Product Placement Category: Drink vs. Car) mixed design.&#13;
Procedure&#13;
As this study recruited a vulnerable population, the information sheet was sent to participants via email 48 hours prior to the in-person study. This afforded participants time to ask questions or express any concerns about the study before being sent the consent form 24 hours prior to commencing the in-person study. Once participants had read and completed the digital consent form, they were sent the digital HADQ, which took approximately 10 minutes to complete. &#13;
	Prior to the main study, participants were screened for cognitive impairment, using the ACE, and visual impairment, using the Ishihara test. At this time the severity of Parkinson’s symptomology was assessed using the MDS-UPDRS where appropriate.&#13;
	On completion of all pre-study screening, participants were asked to firstly complete a prosaccade eye tracking task and then an antisaccade eye tracking task which took approximately 10 minutes. &#13;
	Participants were then asked to complete a pre product placement questionnaire and then watch a short film clip. After watching the film clip, participants were asked to complete a post product placement questionnaire. Finally, participants were asked to complete the Stroop test which took approximately five minutes to provide a further measure of IC and to act as a buffer in time. &#13;
	This process was repeated for a second product category condition. The order of condition completion was randomly counterbalanced across participants to increase internal validity by minimising the potential for order effects (Corriero, 2017). The in-person study lasted approximately an hour for healthy controls and an hour and 30 minutes for participants with PD. At the end of the study, participants were read and given a copy of the debrief sheet, thanked for their participation and time, and given £10 as a contribution towards travel expenses. All raw data were stored on the Lancaster University OneDrive, on a password-protected computer.&#13;
Data Analysis&#13;
	The raw data from the prosaccade and antisaccade tasks were extracted using the EyeLink DataViewer Software (Version 3.2) and processed using the bespoke software SaccadeMachine (Mardanbegi et al., 2019). Noise in the dataset was removed by filtering out frames with a velocity signal greater than 1,500 deg/s or with an acceleration signal greater than 100,000 deg/s². The EyeLink Parser was used to detect fixations and saccadic events. Saccades were extracted alongside multiple temporal and spatial variables. Trials were excluded in cases where the participant did not direct their gaze to the central fixation target. Saccade latencies were required to fall within a temporal window of 80-700ms from target onset; anticipatory saccades made prior to 80ms and excessively delayed saccades made after 700ms were removed.&#13;
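As a rough illustration, the trial-exclusion rules above can be sketched as a per-trial filter. This is a minimal sketch only: the field names and record layout are hypothetical and do not reflect SaccadeMachine's actual output format.

```python
# Sketch of the saccade-trial filtering described above.
# Trial fields are hypothetical; real SaccadeMachine output differs.

MAX_VELOCITY = 1_500        # deg/s: frames above this are treated as noise
MAX_ACCELERATION = 100_000  # deg/s^2
MIN_LATENCY_MS = 80         # earlier saccades are anticipatory
MAX_LATENCY_MS = 700        # later saccades are excessively delayed

def keep_trial(trial):
    """Return True if a trial survives all exclusion criteria."""
    if not trial["fixated_centre"]:  # gaze never reached the central fixation target
        return False
    if trial["peak_velocity"] > MAX_VELOCITY:
        return False
    if trial["peak_acceleration"] > MAX_ACCELERATION:
        return False
    return MIN_LATENCY_MS <= trial["latency_ms"] <= MAX_LATENCY_MS

trials = [
    {"fixated_centre": True,  "peak_velocity": 400, "peak_acceleration": 20_000, "latency_ms": 250},
    {"fixated_centre": True,  "peak_velocity": 400, "peak_acceleration": 20_000, "latency_ms": 60},   # anticipatory
    {"fixated_centre": False, "peak_velocity": 400, "peak_acceleration": 20_000, "latency_ms": 250},  # no central fixation
]
clean = [t for t in trials if keep_trial(t)]
```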
	To improve data analysis reproducibility, statistical analyses were conducted using RStudio (version 2022.09.0) (Quick, 2010). To prepare the Stroop test data for analysis, participants’ Stroop scores (correctly identified ink colours out of 40), congruent and incongruent trial reaction times (ms), and Stroop effect (incongruent trials reaction time (ms) minus the congruent trials reaction time (ms)) were downloaded from PsyToolkit into an Excel file. IC was operationalised as the Stroop effect (Kane &amp; Engle, 2003). &#13;
	To investigate the susceptibility to product placement, a difference in purchasing behaviour score was calculated for each product. To do so, the pre product placement ratings of the likelihood of purchasing each brand were subtracted from the post product placement ratings of the likelihood of purchasing each brand. A positive difference was indicative of participants being more likely to buy the featured product after exposure to product placement, a negative difference suggested that participants were less likely to buy the featured product, and a difference of zero indicated no change in purchase behaviour. &#13;
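The difference-score calculation and its interpretation can be sketched as follows (a minimal illustration; the function names are invented for clarity and are not from the study's R code):

```python
# Sketch of the purchase-behaviour difference score described above:
# post-exposure Likert rating (1-7) minus pre-exposure rating.

def difference_score(pre_rating, post_rating):
    """Positive = more likely to buy after product placement."""
    return post_rating - pre_rating

def interpret(score):
    """Map a difference score onto the interpretation used in the text."""
    if score > 0:
        return "more likely to buy after product placement"
    if score < 0:
        return "less likely to buy after product placement"
    return "no change in purchase behaviour"
```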
	First, to confirm the assumption that IC is impaired in individuals with PD compared to healthy controls, three separate between-factor ANOVAs were performed to compare the main effect of group (YC, OC, and PD) on antisaccade latency, antisaccade error rate, and Stroop effect (see Appendix B for R code). A between-factor ANOVA was chosen because it compares three or more categorical groups to establish whether there is a significant difference on a dependent measure (Henson, 2015). As ANOVA results only identify that a difference exists between groups, post hoc Tukey HSD tests for multiple comparisons were conducted to determine where the differences between groups lie (Abdi &amp; Williams, 2010). &#13;
	To investigate whether IC influences product placement susceptibility, a linear mixed effects model (LMM) was fitted. The LMM incorporated the difference in purchase behaviour score (differencescore) as the outcome, and group (PD vs. healthy older control vs. healthy younger control) and measures of IC (antisaccade latency, antisaccade error rate, and Stroop effect) as fixed effects. Given that IC is part of an individual’s executive function (Crawford et al., 2002), ACE score (as a measurement of the participant’s overall cognitive function; Noone, 2015) was also fitted as a fixed effect. LMMs allow for the analysis of fixed effects of independent variables whilst also accounting for unexplained variance corresponding to random effects, such as participant variation (Baayen et al., 2008). Random effects of both participant and product (car or drink) on intercepts were added (see Appendix C for R code). The LMM was fitted using the Satterthwaite adjustment method in the lme4 package (Bates et al., 2014) in R (version 2022.09.0) (Quick, 2010). &#13;
Ethics&#13;
	This study received ethical approval from the Psychology Department Research Ethics Committee at Lancaster University on 22/06/2022 and complied with The British Psychological Society’s guidelines (2014).&#13;
&#13;
&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3068">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3069">
                <text>Data/R.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3070">
                <text>Ball2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3071">
                <text>Elena Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3072">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3073">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3074">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3075">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3076">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3077">
                <text>Dr Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3078">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3079">
                <text>Psychology of Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3080">
                <text>53 Participants. 20 healthy younger controls, 20 healthy older controls, 13 individuals with mild-moderate Parkinson's disease</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3081">
                <text>ANOVA&#13;
Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="147" public="1" featured="0">
    <fileContainer>
      <file fileId="137">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f32d9fb1ed51218774543381b3025654.xlsx</src>
        <authentication>9d383cde2bea34174cef2f6b085935ca</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3042">
                <text>Do inward and outward consonants and vowels have different effects on customer’s liking rates towards the brand names?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3043">
                <text>Keung Wang Shan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3044">
                <text>5/9/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3045">
                <text>Speech development begins with the way that infants and children produce their first words. In the early stage of speech acquisition, children tend to produce syllables that require little articulatory effort, following intrasyllabic and intersyllabic consonant-vowel co-occurrence patterns (MacNeilage et al., 2000). Such patterns may affect individuals’ preferences for words later in life, such as brand names. More pointedly, according to Topolinski et al. (2014), there is an in-out effect that significantly affects individuals’ liking ratings of brand names containing inward and outward consonants. However, previous research has focused only on consonants, and there is insufficient research on the combined effect of consonants and vowels on brand names. This study was therefore designed to investigate whether the in-out effects of both consonants and vowels in English brand names are associated with customers’ emotional responses to the words, and whether the presence of MacNeilage syllables in the brand names is associated with customers’ liking ratings. The experiment was conducted through an online questionnaire consisting of 360 sound stimuli, testing participants’ liking ratings of brand names, which were non-words combining inward and outward consonants and vowels, and MacNeilage syllables. Results showed that liking ratings were significantly higher for brand names that included inward consonants and vowels, while lower ratings were associated with outward consonants and vowels. In addition, no significant relationship was found between the number of MacNeilage syllables and preference for the brand names, although individuals showed a higher preference for brand names whose first syllable was a MacNeilage syllable.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3046">
                <text>Consonants, vowels, MacNeilage syllables, brand names, liking rates</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3047">
                <text>Participants&#13;
A total of 51 participants who spoke different first languages were recruited through the researcher’s family and friends and via SONA. All were healthy individuals with normal vision and hearing, aged 18 or above, with no health conditions. The sample included 23 males and 28 females, with an age range of 22 to 28 and a mean age of 23.33, SD=.&#13;
Materials&#13;
The study was carried out as an online questionnaire consisting of four open-ended questions at the beginning, followed by 360 questions answered on a 10-point Likert scale. The whole questionnaire was based on the liking rating of brand names presented as sound stimuli. The four open-ended questions asked participants’ age, gender, first language, and whether they spoke other languages (see Appendix D). Next, 360 questions, each containing an audio recording of a sound stimulus lasting between one and three seconds, were presented in the questionnaire (see Appendix D). All sound stimuli were recorded in a monotone by the researcher’s supervisor, a native English speaker with a Northern English accent and prior training in phonology. The 360 sound stimuli were divided into six sets covering six combinations of inward and outward consonants and vowels. The six sets comprised non-words whose consonants required articulation from the front to the middle to the back of the mouth (inward) (FMB), from front to back to middle (FBM), from middle to front to back (MFB), from middle to back to front (MBF), from back to middle to front (outward) (BMF), and from back to front to middle (BFM). Each set contained 60 stimuli with the same articulation of consonants and different articulations of vowels, including 10 stimuli with the same articulation of both consonants and vowels. Within each set of the same consonant articulation, the six possible combinations of front/middle/back vowels were paired with the consonants, so that every possible arrangement of front/middle/back consonants and vowels was tested in the questionnaire. 
Moreover, among the 360 stimuli, 120 contained zero MacNeilage syllables, 178 contained one MacNeilage syllable, and 62 contained three MacNeilage syllables. To ensure that there was no personal bias towards the brand names, all stimuli were non-words created by the researcher, so that participants would not be familiar with any of the brand names.&#13;
Procedure&#13;
Before the study began, all participants were sent a participant information sheet and consent form by email (see Appendix A &amp; B), with a link to the online questionnaire attached in the same email. At the beginning of the questionnaire, four open-ended questions on personal information were presented, and participants were asked to give their age, gender, first language, and whether they spoke other languages (see Appendix D). After completing these four questions, participants answered 360 questions, each containing an audio recording of a sound stimulus, referred to as a brand name in this survey. Each question was displayed as ‘how much do you like this brand name’, and participants rated each sound stimulus according to their preference on the 10-point Likert scale, with 1 as the lowest and 10 as the highest (see Appendix D). Every question had a ‘play’ button, and participants could replay the audio as many times as they wished. Five questions were presented on each page, and there were 73 pages in total, including one page at the beginning for the four open-ended questions. The 360 sound-stimulus questions were presented in a randomised order for each participant to ensure there were no order effects relating to individual stimuli in the data. The whole study took around 20 to 30 minutes, depending on whether participants replayed the audio recordings. After completing the questionnaire, all participants were sent a debrief sheet by email, allowing them to ask any questions regarding the study (see Appendix C).&#13;
Ethics&#13;
The study was granted ethics approval on 19/05/2022. A participant information sheet and consent form were delivered to all participants before the study began, indicating their right to withdraw up to three weeks after participating in the experiment if they changed their minds. After completion of the questionnaire, a debrief sheet was sent to participants to allow them to raise questions regarding the study. They were also informed that their participation was confidential, with all data stored in encrypted files.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3048">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3049">
                <text>Data/Excel.csv&#13;
Data/R.r</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3050">
                <text>none</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3051">
                <text>Keung Wang Shan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3052">
                <text>open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3053">
                <text>none</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3054">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3055">
                <text>data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3056">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3057">
                <text>Padraic Monaghan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3058">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3059">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3060">
                <text>51 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3061">
                <text>Linear mixed effects modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
