<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=11&amp;sort_field=added" accessDate="2026-05-03T10:57:27+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>11</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="149" public="1" featured="0">
    <fileContainer>
      <file fileId="144">
        <src>https://www.johnntowse.com/LUSTRE/files/original/17e340bee54ebac611344515a86f9ff6.pdf</src>
        <authentication>4a222c6141db92dc7ee55aa00fb0d0ce</authentication>
      </file>
      <file fileId="145">
        <src>https://www.johnntowse.com/LUSTRE/files/original/896fd29b37e809eb53d43c14fa1b8eca.zip</src>
        <authentication>a0f3346a973237810f84764261f03f24</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3082">
                <text>Does implicit mentalising involve the representation of others’ mental state content?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3083">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3084">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3085">
                <text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame-of-reference, which creates a spatial overlap between stimulus and response locations in the joint (but not the individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study investigated the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which typical geometric stimuli were replaced with a unique set of animal silhouettes. Half of the set was surreptitiously assigned to the participant and the other half to their partner. Critically, to examine the content of co-representation, participants were afterwards presented with a surprise image recognition task. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results are discussed, and possible methodological remedies for future studies are suggested.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3086">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3087">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to run the stimuli-selection pre-test remotely. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes for the main experiment. Recognisability was an important consideration because participants would catch only a brief glimpse of the animals; the ability to recognise the silhouettes quickly and subconsciously was therefore paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and presented sequentially. Each image was displayed for 1000 ms to match the duration of stimulus exposure in the final experimental design.&#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating for each naming attempt (again on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability scores for each animal were averaged and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of the same species. Because two animals tied for 32nd place with the same recognisability score, the animal with the higher name-guessing confidence rating was selected.&#13;
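For illustration, the ranking procedure described above can be reproduced with a short Python/pandas sketch (a sketch only, with hypothetical file and column names; it is not part of the archived project files):&#13;
&#13;
    import pandas as pd&#13;
&#13;
    # Hypothetical long-format ratings: one row per participant x image,&#13;
    # with columns 'species', 'image', 'recognisability', 'confidence'&#13;
    ratings = pd.read_csv("pretest_ratings.csv")&#13;
&#13;
    # Mean recognisability and mean naming confidence per image&#13;
    summary = (ratings.groupby(["species", "image"], as_index=False)&#13;
                      .agg(recog=("recognisability", "mean"),&#13;
                           conf=("confidence", "mean")))&#13;
&#13;
    # Sort by recognisability (naming confidence breaks ties, as in the&#13;
    # pre-test), keep the best image per species, then take the top 32&#13;
    chosen = (summary.sort_values(["recog", "conf"], ascending=False)&#13;
                     .drop_duplicates("species")&#13;
                     .head(32))&#13;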
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously participated in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females), 51 of whom were students, staff, or members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate the strength of previously found effects) and the additional memory/recognition task, a conservative-leaning effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a small-to-medium repeated-measures, within-between interaction effect was approximately 52.&#13;
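As an illustrative check only, the reported G*Power figure can be approximated with a noncentral-F computation in Python/scipy. The sketch below assumes G*Power's repeated-measures conventions for within-between interactions (k = 2 groups, m = 2 measurement levels, the default correlation of 0.5 among repeated measures, and nonsphericity epsilon = 1); it is not the authors' computation:&#13;
&#13;
    from scipy import stats&#13;
&#13;
    f, alpha, k, m, rho, eps = 0.2, 0.05, 2, 2, 0.5, 1.0&#13;
&#13;
    def power(n):&#13;
        # Noncentrality for the within-between interaction (assumed&#13;
        # G*Power convention): lambda = f^2 * N * m * eps / (1 - rho)&#13;
        lam = f**2 * n * m * eps / (1 - rho)&#13;
        df1 = (k - 1) * (m - 1)&#13;
        df2 = (n - k) * (m - 1)&#13;
        crit = stats.f.ppf(1 - alpha, df1, df2)&#13;
        return stats.ncf.sf(crit, df1, df2, lam)&#13;
&#13;
    n = k + 1&#13;
    while power(n) &lt; 0.80:   # smallest N reaching 80% power&#13;
        n += 1&#13;
    print(n)                    # lands in the region of the reported ~52&#13;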
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition Tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and a resolution of 5120 x 2880 pixels at 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.&#13;
The 32 animals chosen via the pre-test for use in the main experiment (Simon/Recognition task) were recoloured to be entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task&#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to a third computer, where they sat on either the left or the right side, approximately 60 cm from the screen (diagonally, at roughly 45° from its centre), with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm from each other. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton when an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which they appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task which came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. The 16 animals were further divided in half and matched to each of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the Recognition Task. Participant sitting location (left/right), stimulus colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the Recognition task) were counterbalanced between participants. Additionally, stimulus presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
After reading brief instructions, participants completed a practice section. Once participants had achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks, where each block contained 16 trials (corresponding to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., eight) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of compatible and incompatible trials for each participant (i.e., four of each per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible.&#13;
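A minimal sketch of how such a block list could be pseudorandomised (illustrative Python; the animal names are placeholders, not the project's stimuli):&#13;
&#13;
    import random&#13;
&#13;
    blue = [f"blue_{i}" for i in range(8)]       # animals in one colour&#13;
    orange = [f"orange_{i}" for i in range(8)]   # animals in the other&#13;
&#13;
    def make_block():&#13;
        trials = []&#13;
        for colour_set in (blue, orange):&#13;
            # four compatible and four incompatible trials per colour&#13;
            # (and hence per participant) in every block&#13;
            sides = ["compatible"] * 4 + ["incompatible"] * 4&#13;
            random.shuffle(sides)&#13;
            trials += list(zip(colour_set, sides))&#13;
        random.shuffle(trials)   # interleave the two colours within the block&#13;
        return trials&#13;
&#13;
    blocks = [make_block() for _ in range(8)]    # 8 blocks x 16 = 128 trials&#13;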
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks: in some studies, each trial (and thus stimulus presentation) terminated immediately upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was displayed for only a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), followed by a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration to 1000 ms irrespective of participant response ensured that each animal colour/species was displayed for an equal duration. This was important so as not to bias participants’ incidental memory towards trials on which a participant was slower to respond (and would therefore have kept the stimulus on screen for longer, disproportionately encouraging encoding).&#13;
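The fixed-duration trial structure might look like the following PsychoPy sketch (a sketch under assumptions: a keyboard key stands in for the custom pushbuttons, the image path is hypothetical, and the feedback duration is not specified in the source):&#13;
&#13;
    from psychopy import visual, core, event&#13;
&#13;
    win = visual.Window(fullscr=True, color="white", units="pix")&#13;
    fixation = visual.TextStim(win, text="+", color="black", height=40)&#13;
    feedback = visual.TextStim(win, text="", color="black")&#13;
    clock = core.Clock()&#13;
&#13;
    def run_trial(image_path, x_pos, my_colour):&#13;
        stim = visual.ImageStim(win, image=image_path, pos=(x_pos, 0))&#13;
        fixation.draw(); win.flip(); core.wait(0.25)   # 250 ms fixation&#13;
        stim.draw(); win.flip()&#13;
        event.clearEvents(); clock.reset()&#13;
        rt = None&#13;
        while clock.getTime() &lt; 1.0:                # stimulus stays up a full 1000 ms&#13;
            if rt is None and event.getKeys(keyList=["space"]):&#13;
                rt = clock.getTime()                   # a response does not end the trial&#13;
        if my_colour:&#13;
            feedback.text = "well done" if rt is not None else "timeout exceeded"&#13;
        else:&#13;
            feedback.text = "incorrect, sorry" if rt is not None else ""&#13;
        feedback.draw(); win.flip(); core.wait(0.5)    # assumed feedback duration&#13;
        win.flip(); core.wait(0.25)                    # 250 ms blank intertrial interval&#13;
        return rt&#13;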
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal was previously assigned to, and presented in the Simon task as, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in individual condition’s case, this simply refers to the not-self-assigned colour, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to the individual computers which they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one by one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals had appeared in during the previous (Simon) task: so long as they remembered having seen the silhouette at all, they were to select “yes”. There was no time limit on this task. Thirty-two animal silhouettes were presented, of which 16 had been seen in the Simon task, while the remaining 16 unseen animal images were included as foils. The participants’ responses to the two aforementioned questions were recorded as key response variables.&#13;
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked about whether they had any suspicions of what the study was testing, or whether they paid specific attention to, and/or memorised the animal species shown in the Simon task on purpose (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which may undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner using two questions. The first was a text-based question asking how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging from “I have never seen him/her before: s/he is a stranger to me.” to “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consisted of pictographic representations of the degree of interpersonal relationship. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each consisting of two Venn-diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was included as a confirmatory measure for the IOS scale, which served as the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3088">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3089">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3090">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3091">
                <text>Malcolm Wong&#13;
Aubrey Covill&#13;
Elisha Moreton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3092">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3093">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3094">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3095">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3096">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3097">
                <text>Dr. Jessica Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3098">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3099">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3100">
                <text>25 in a pre-test, 65 in the main experiment</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3101">
                <text>Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="150" public="1" featured="0">
    <fileContainer>
      <file fileId="153">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2a6af9e3bd67966c26821868b9693304.pdf</src>
        <authentication>7822a912e947086abb3415b7484d575b</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3102">
                <text>Facts May Care About Your Feelings: The Effects of Empirical and Anecdotal Evidence in the Perception of Climate Change</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3103">
                <text>Constance Jordan-Turner</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3104">
                <text>21/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3105">
                <text>Although the effects of humanmade climate change are becoming ever more potent, the consensus gap between climate scientists and the public is as wide as ever. It is critical that climate change communication is improved to try to close this gap. Several strategies can be implemented, including using anecdotes alongside or instead of empirical evidence to elicit emotions. In this study, 74 members of the public completed a survey. Participants were randomly assigned to one of four conditions which dictated the type of evidence they received: no evidence, empirical evidence, anecdotal evidence, or both empirical and anecdotal evidence. Results suggest that, in general, there was no effect of evidence on participants’ perceptions of climate change. This result held even after controlling for worldview and ideology. These findings have implications for the theory of inserting emotion into climate change communication.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3106">
                <text>Climate change, communication, perception, emotion, evidence</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3107">
                <text>Participants and design&#13;
There were 74 participants (26 male; 46 female; one non-binary; one preferred not to say). The mean age of the participants was 37.99 (SD = 16.93). Participants were recruited by advertising the study on the researcher’s social media accounts (Facebook and Instagram) using a standardised advertisement (see Appendix A) and through word of mouth. Participants were all members of the general public. The study manipulated two independent variables in a between-participants design: anecdotal evidence (without-anecdotal vs. with-anecdotal) and empirical evidence (without-empirical vs. with-empirical), resulting in four conditions. Participants were randomly allocated to one of the four conditions, subject to the constraint of equal cell numbers.&#13;
&#13;
This study gained ethical approval from the Faculty of Science and Technology Research Ethics Committee.&#13;
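For illustration, random allocation under an equal-cell-size constraint amounts to block randomisation, as in this Python sketch (not the authors' procedure; with 74 participants the cells can only be near-equal):&#13;
&#13;
    import random&#13;
&#13;
    def allocate(n, conditions=("none", "empirical", "anecdotal", "both")):&#13;
        # Repeat each condition label equally often, top up the remainder&#13;
        # with distinct conditions, then shuffle: cell sizes differ by&#13;
        # at most one participant&#13;
        base = list(conditions) * (n // len(conditions))&#13;
        extra = random.sample(conditions, n % len(conditions))&#13;
        labels = base + list(extra)&#13;
        random.shuffle(labels)&#13;
        return labels&#13;
&#13;
    assignments = allocate(74)   # e.g. cells of 19, 19, 18, 18&#13;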
Evidence Passages&#13;
Empirical Evidence&#13;
The empirical evidence vignette included a statement explaining that human-induced carbon dioxide emissions and global average temperature have increased synchronously since pre-industrial times, accompanied by graphs demonstrating these upward trends. The vignette also highlighted the scientific consensus that humanmade climate change is occurring and will have adverse consequences. It then explained that these adverse consequences had already begun to materialise; the increase in extreme weather events was highlighted in a graph showing the tripling of weather-related disasters between 1980 and 2010. Finally, the vignette finished with references for the information it contained (see Appendix B).&#13;
Anecdotal Evidence&#13;
The anecdotal evidence vignette contained information about Storms Dudley, Eunice and Franklin, which all made landfall in Britain in quick succession in 2022. The storms were a weather-related event that some scientists have linked to climate change (Barrett, 2022). Specifically, the vignette included information about the storms’ destructiveness, such as the cost of the damage they caused and the number of people killed. The destructiveness of the storms was highlighted with images of damage and flooding in Wells, Otley, and Brentwood, as well as an image from Blackpool demonstrating the height and power of the waves caused by the storms. The vignette included a stock image of a man standing in a flooded living room and a short passage outlining the experience of a fictitious character named Matt Johnson, whose family home had been severely flooded as a result of the storms. The vignette concluded with a statement from climate scientist Robert Klein, who argued that the impact of the storms was exacerbated by climate change, which generated “super storm” conditions. Finally, there was a reference to an article about the storms and their link to climate change (see Appendix C).&#13;
Measures&#13;
Table 1 contains an overview of the measures embedded in the questionnaire.  For the full questionnaire, please refer to Appendix D.&#13;
Disaster Belief&#13;
The disaster belief measure elicited estimates of the frequency of weather-related disasters that would occur in the listed years. Participants were given an approximate frequency for 2019 from the International Disaster Database. The measure consisted of six items: 2030, 2040, 2050, 2060, 2070 and 2080. Participants responded by typing their estimated number next to the relevant year.&#13;
Harm Extent&#13;
The harm extent measure consisted of questions concerning how much harm participants thought climate change would cause themselves, their family, their community, Britain, other countries, and future generations. There were six items, such as ‘How much do you think climate change will harm you?’ and ‘How much do you think climate change will harm people in Britain?’ Responses were rated from (1) ‘not at all’ to (4) ‘a great deal’.&#13;
Harm Timing&#13;
The harm timing measure consisted of questions concerning when participants thought climate change would begin to cause harm to themselves, their family, their community, Britain, other countries, and future generations. There were only two items, ‘When do you think climate change will begin to harm Britain?’ and ‘When do you think climate change will begin to harm other countries?’ Responses were rated as (1) ‘Never’; (2) ‘100 years’; (3) ‘50 years’; (4) ‘25 years’; (5) ‘10 years’; and (6) ‘Right now’.&#13;
CO2 Attributions&#13;
The CO2 attributions measure assessed how much participants thought human carbon dioxide emissions contributed to events such as heatwaves, rising sea levels, flooding, and Storms Dudley, Eunice, and Franklin. There were six items, such as ‘CO2 contribution to the observed increase in atmospheric temperature during the last 130 years’, ‘CO2 contribution to the European heat wave in 2022 that killed over 5,000 people’, and ‘CO2 contribution to storms Dudley, Eunice, and Franklin in the UK (2022)’. These responses were gathered using a sliding scale from 0 to 100%.&#13;
Intention&#13;
The intention measure consisted of questions asking about participants’ pro-environmental intentions. There were seven items. Examples of items include ‘I will take part in an environmental event (e.g., Earth hour)’, ‘I will give money to a group that aims to protect the environment’, and ‘I will switch to products that are more environmentally friendly’. The response options were simply ‘Yes’ or ‘No’.   &#13;
Mitigation&#13;
The mitigation measure consisted of questions asking about participants’ support for mitigating policies. There were five items. Example items include ‘Signing an international treaty that requires Britain to cut its carbon dioxide emissions by 90% by 2050’, ‘Adding a surcharge to electrical bills to establish a fund to help make buildings more energy efficient and to teach British citizens how to reduce energy use’, and ‘Providing tax rebates for people who purchase energy-efficient vehicles or solar panels’. Responses were rated from (1) ‘Strongly Oppose’ to (4) ‘Strongly Support’.&#13;
CO2 Adjustment&#13;
The CO2 adjustment measure assessed how much participants thought Britain should adjust its CO2 emissions over the next 10 years. There was only one item: ‘How much should Britain adjust CO2 emissions during the next 10 years?’ Responses were rated from (1) ‘Not at all’ to (6) ‘Reduce by 50%’.&#13;
Free-Market Support&#13;
The free-market support measure consisted of questions asking about participants’ support for the free market. There were five items. Example items include ‘An economic system based on free-markets, unrestrained by government interference, automatically works best to meet human needs’ and ‘The preservation of the free-market system is more important than localized environmental concerns’. Two items, ‘Free and unregulated markets pose important threats to sustainable development’ and ‘The free-market system is likely to promote unsustainable consumption’, required reverse coding upon analysis.&#13;
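For illustration, reverse coding on a 5-point scale maps a raw score x to 6 - x. A Python/pandas sketch follows (hypothetical column and file names; the archived data are in SPSS format, so this is not the project's own script):&#13;
&#13;
    import pandas as pd&#13;
&#13;
    df = pd.read_csv("survey.csv")   # assumed flat export of the responses&#13;
&#13;
    # The two negatively keyed free-market items are reverse coded&#13;
    for item in ["fm_threat", "fm_unsustainable"]:    # hypothetical names&#13;
        df[item] = 6 - df[item]&#13;
&#13;
    # Free-market support = mean of the five items after reverse coding&#13;
    fm = ["fm_1", "fm_2", "fm_3", "fm_threat", "fm_unsustainable"]&#13;
    df["free_market_support"] = df[fm].mean(axis=1)&#13;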
Table 1&#13;
Measures embedded within the questionnaire. The first column contains the name of the measure; the second column contains the instructions on how to respond to items in that measure; and the third column describes how answers to the items were coded.&#13;
Measure Name	Questions	Coded Response&#13;
Disaster belief	Please provide an estimate of the frequency of weather-related disasters that will occur in each year (6 items).	Participants used the keyboard to type in a number for each year.&#13;
Harm extent	The following items examine your thoughts about the extent of harm that will be caused by climate change (6 items).	4-point scale: (1) ‘Not at all’; (2) ‘A little’; (3) ‘A moderate amount’; (4) ‘A great deal’.&#13;
Harm timing	The following items examine your thoughts about when climate change will begin to cause harm (2 items).	6-point scale: (1) ‘Never’; (2) ‘100 years’; (3) ‘50 years’; (4) ‘25 years’; (5) ‘10 years’; (6) ‘Right now’.&#13;
CO2 attribution	For each of the following questions, please estimate the contribution from human CO2 emissions to cause each event. For example, 0% would mean humans are not at all responsible, whereas 100% would mean that human CO2 emissions are fully responsible (6 items).	Participants used the mouse to place their response on a sliding scale. The sliding scale contained the numbers ‘0’, ‘10’, ‘20’, ‘30’, ‘40’, ‘50’, ‘60’, ‘70’, ‘80’, ‘90’, and ‘100’.&#13;
Pro-environmental intentions	Please indicate whether or not you will engage in the following actions (7 items).	0 = No; 1 = Yes&#13;
Mitigation	How much do you support or oppose the following policies (5 items).	4-point scale: (1) ‘Strongly Oppose’; (2) ‘Oppose’; (3) ‘Support’; (4) ‘Strongly Support’.&#13;
CO2 adjustment	How much should Britain adjust CO2 emissions during the next 10 years?	6-point scale: (1) ‘Not at all’; (2) ‘Reduce by 10%’; (3) ‘Reduce by 20%’; (4) ‘Reduce by 30%’; (5) ‘Reduce by 40%’; (6) ‘Reduce by 50%’.&#13;
Free-market belief	Please indicate how much you agree with each statement (5 items).	5-point scale: (1) ‘Strongly Disagree’; (2) ‘Disagree’; (3) ‘Neutral’; (4) ‘Agree’; (5) ‘Strongly Agree’.&#13;
Demographic questions	What is your age?	Participants used the keyboard to type in a number.&#13;
	What is your gender?	1 = Male; 2 = Female; 3 = Non-binary; 4 = Other; 5 = Prefer Not to Say&#13;
&#13;
Procedure&#13;
All participants completed a questionnaire assessing their belief in and concern about humanmade climate change and their mitigation beliefs. The questionnaire was administered online using Qualtrics survey software. Participants responded to the questionnaire by using either the mouse to select answers or the keyboard to type in numbers.&#13;
At the beginning of the questionnaire, all participants received an information sheet about the aim of the study, the lack of risks associated with participating, and how participant information would be stored. Participants were asked to indicate their informed consent. For the full participant information sheet and consent form, please refer to Appendix E. After participants gave their consent and continued on to the survey, they were asked their age and gender. They were then presented with evidence according to the condition they were assigned to. There were four conditions: no evidence, empirical evidence, anecdotal evidence, and both empirical and anecdotal evidence.&#13;
After they had read any assigned evidence passages, participants answered the disaster belief measure. Next, they answered the CO2 attribution measure. Then they answered the harm extent measure and the harm timing measure. After that came the intention measure, and then the mitigation measure. In the final part of the questionnaire, they were asked how much Britain should cut its CO2 emissions over ten years, followed by questions on their support for the free market. Participants were then asked demographic questions about their age and gender. Finally, the participants were given a debrief sheet (Appendix F).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3108">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3109">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3110">
                <text>Jordan-Turner2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3111">
                <text>Sacha Crossley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3112">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3113">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3114">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3129">
                <text>Dr. Mark Hurlstone</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3130">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3131">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3132">
                <text>74</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3133">
                <text>ANCOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="151" public="1" featured="0">
    <fileContainer>
      <file fileId="150">
        <src>https://www.johnntowse.com/LUSTRE/files/original/54ff2b32ca6ddc076571e720c7f80444.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
      <file fileId="151">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6ee62233e0839f9c2766d58b4b93b348.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
      <file fileId="152">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6bb01a175bd17e9527b8e3c400460fb2.pdf</src>
        <authentication>1c7c86c045532986fdad17219d9d6e82</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3115">
                <text>Eye tracking and Attention Deficit Hyperactivity Disorder (ADHD): Can eye tracking identify the feigning of ADHD?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3116">
                <text>Reva Maria George</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3117">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3118">
                <text>When diagnosing adult ADHD, it has proven difficult for clinicians to detect deceptive behaviour. A diagnosis of ADHD comes with economic, academic, and recreational benefits, which may account for the increasing feigning of the disorder. Current diagnostic methods, clinical interviews and self-report scales, can be easily manipulated to obtain a positive diagnosis. Hence, the present study evaluated the utility of eye-tracking devices for detecting the feigning of ADHD. Eye movements of 38 participants (7 ADHD, 15 healthy controls, and 16 healthy feigners) were captured throughout prosaccade and anti-saccade tasks. The performance of the participants on the tasks was evaluated in terms of latency and error rate (percentage). The findings of the study reveal a significant difference in anti-saccade latency: feigners showed increased latency compared with healthy controls and ADHD participants. Because of the limited sample size, the study's findings cannot be generalized. Further investigations are needed with a much larger sample.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3119">
                <text>Eye-tracking, ADHD, Feigning, Prosaccade task, Anti-saccade task, latency, error rate, eye movements</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3120">
                <text>Method&#13;
Participants &#13;
Previous studies examining feigning in ADHD acquired data from samples of around 90-100 participants (Booksh et al., 2010; Frazier et al., 2008; Harrison et al., 2007). The study therefore aimed to recruit 90 participants: 30 each in the ADHD, healthy control, and healthy feigner (faking the disorder) groups. Participants with and without a clinical diagnosis of ADHD were selected using the opportunity sampling method. A total of 42 participants aged 18-35 volunteered and were recruited for the study through the university disability service (11%), posters (16%), and word of mouth (73%). Data from two participants were removed as the eye tracker repeatedly lost the pupil during recording. All participants were rewarded with an equal chance to win one of six £25 vouchers. Thirty-one of the 42 participants were healthy younger adults; of these, 15 (7 females; Mage = 24.33, SDage = 4.32) participated as healthy controls and the remaining 16 (9 females; Mage = 24.25, SDage = 1.88) as healthy feigners. Seven ADHD participants (6 females) with a mean age of 22.71 (SD = 2.22) completed the study. The severity of ADHD symptoms was assessed using the Adult ADHD Self-Report Scale (for more demographic details see Table 1). The exclusion criteria were: 1) any visual impairment (other than corrected-to-normal vision); 2) any cognitive impairment; 3) an additional diagnosis of a neurological condition; and 4) the lack of a proper clinical diagnosis of ADHD (for the ADHD group). These criteria were applied because such impairments may interfere with participants’ performance on the task.&#13;
Prior to data analysis, one participant was removed from the ADHD group due to the lack of a proper clinical diagnosis. Furthermore, a control participant was excluded on the assumption of probable mild cognitive impairment, having scored below the cut-off of 82 on the Addenbrooke’s Cognitive Examination-III (ACE-III) (see Table 1 for further demographic details).&#13;
Stimuli and Apparatus &#13;
Addenbrooke’s Cognitive Examination-III (ACE-III) &#13;
The ACE-III, developed by Hodges et al., is an extended cognitive screening instrument. Its items produce five sub-scores totalling 100, each corresponding to a different cognitive domain: attention (18 points), memory (26 points), verbal fluency (14 points), language (26 points), and visuospatial skills (16 points) (Noone, 2015). Higher scores indicate superior cognitive functioning within the given domain. The validated cut-off point for normal cognitive functioning is 82/100; individuals who obtain a total score of &lt; 82 are therefore assumed to have probable mild cognitive impairment. The ACE-III has proven reliability (α = 0.88), sensitivity (0.93), specificity (1.0) and concurrent validity with alternative cognitive assessments such as the ACE-R (r = 0.99, p &lt; 0.01; Hsieh, 2013). &#13;
Ishihara Colour blindness test &#13;
The Ishihara colour blindness test, developed by Dr Shinobu Ishihara, was used to assess colour vision deficiency of congenital origin, particularly red-green deficiency (Ishihara, 2011). It consists of 24 coloured plates, each containing a circle of randomly coloured dots. Each plate includes primary and secondary colour dots, with the primary colours forming patterns or numbers while the secondary colours form the background (Shaygannejad et al., 2012). Plates 1–15 were used because the main goal was simply to separate colour defects from normal colour perception. Participants were instructed to read the numbers aloud, with no more than three seconds’ delay. A participant who made errors reading the numbers on two or more plates was considered to have impaired colour vision. &#13;
Royal Air Force (RAF) ruler &#13;
The RAF near point rule is a 50cm long square rule with a cheek rest and a slider holding a revolving four-sided cube. One of the four sides has a vertical line with a central dot for convergence fixation. It is used to determine the near point of convergence (NPC) (Sharma, 2017). The participant is instructed to keep a direct gaze on the dot as the slider is advanced and to report when the image of the dot breaks into two. The cut-off points for NPC break and NPC recovery are 5 and 7 cm respectively (Pang et al., 2010). &#13;
Adult ADHD Self Report Scale (ASRS-v1.1; Kessler et al., 2005) &#13;
The severity of ADHD symptoms presented by individuals with ADHD was assessed using the ASRS. The ASRS is an 18-item checklist, developed by the World Health Organization (WHO) work group together with the WHO World Mental Health (WMH) Survey Initiative (Kessler et al., 2005), to screen for ADHD in adult patients. Completion of the ASRS requires participants to indicate how much they agree that each statement describes their behaviour over the past six months. The questions are divided into two parts: Part A and Part B. Part A contains six questions indicative of symptoms consistent with ADHD and is used for screening purposes; a score of 4 or above denotes symptoms typical of ADHD. The final 12 questions in Part B provide a more detailed breakdown of the specific symptoms an individual is presenting. The scale has high concurrent validity, and its internal consistency (Cronbach’s α) was found to be 0.88 (Adler et al., 2006).&#13;
Hospital Anxiety and Depression Scale (HADS) &#13;
The Hospital Anxiety and Depression Scale was developed by Zigmond and Snaith in 1983. It is a 14-item measure used to detect psychological distress (Zigmond &amp; Snaith, 1983). Seven of the items measure anxiety (HADS-A), while the remaining seven measure depressive symptoms (HADS-D). For each item, the participant indicates on a four-point scale the degree to which a given statement relates to how they have been feeling over the past week. The maximum score for each of the anxiety and depression subscales is 21: a score of 0-7 represents “normal”, 8-10 “mild”, 11-14 “moderate” and 15-21 “severe” (Pais-Ribeiro et al., 2018). The scale is reliable and valid in measuring symptoms in both general and psychiatric populations (Bjelland et al., 2002). &#13;
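For readers implementing the banding, a minimal Python sketch (an illustration added here, not part of the study materials) scores one seven-item HADS subscale, assuming integer item responses from 0 to 3: &#13;
def score_hads_subscale(responses):&#13;
    # responses: seven integers, each 0-3, for HADS-A or HADS-D&#13;
    total = sum(responses)  # subscale total, 0-21&#13;
    if total >= 15:&#13;
        return total, "severe"&#13;
    if total >= 11:&#13;
        return total, "moderate"&#13;
    if total >= 8:&#13;
        return total, "mild"&#13;
    return total, "normal"&#13;
For example, score_hads_subscale([2, 1, 2, 2, 1, 2, 1]) returns (11, "moderate"). &#13;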
&#13;
Eye-Tracking Measurement &#13;
Participants’ eye movements were recorded via an EyeLink Desktop 1000 at 500 Hz. To minimise head movements, a chin rest was used. Participants were seated approximately 55cm from the computer monitor (refresh rate 60 Hz). All stimuli used in the study were created and controlled using Experiment Builder software version 1.10.1630. Two computers were used for the eye-tracking system: a host PC, which tracks the eye movements and determines actual gaze positions, and a display computer, which shows the stimuli during the calibration and experimental trials. &#13;
Calibration  &#13;
Prior to presenting the experimental stimuli, participants completed a 4-point calibration to ensure the eye tracker was accurately tracking their eyes. During this trial, participants were asked to follow a red dot that moved to the four end points of a ‘+’ shape. &#13;
Prosaccade task &#13;
Participants were asked to complete 16 gap trials as quickly and accurately as possible. First, participants were instructed to look at a fixation point to centre their gaze: a white target displayed at the centre of the screen for 1000ms. They were then told to focus on the red lateralised target, presented randomly to the left or right of the screen at 4° (visual angle) for 1200ms. The temporal gap in stimulus presentation was created by a 200ms blank interval screen displayed between the offset of the white fixation stimulus and the onset of the red target. &#13;
Anti-saccade task &#13;
For the anti-saccade task, participants completed 24 gap trials preceded by 4 practice trials. They were asked to look at the central white fixation, presented for 1000ms, before shifting their gaze and attentional focus to the opposite side of the screen from where the green target appeared. The green lateralised target was displayed randomly to the left or right side of the screen at 4° (visual angle) for 2000ms. A 200ms blank interval screen served as the gap between the fixation point and the target. &#13;
 Procedure &#13;
The study was approved by the Lancaster University Psychology Department Ethics Committee. Prior to study commencement, healthy younger adult volunteers were randomly assigned to either the healthy control or the healthy feigner (asked to feign ADHD) group. All individuals with a formal clinical diagnosis of ADHD were assigned to the ADHD group. &#13;
The participants were required to visit the lab in order to take part. Before commencing the study, participants provided informed consent. After the required demographic data were collected, participants were screened for the probable presence of mild cognitive impairment using the ACE-III. They were also screened for visual impairments using the RAF rule and the Ishihara colour blindness test. Participants were then asked to complete the HADS, to screen for psychological distress. Additionally, ADHD participants completed the ASRS questionnaire to determine the severity of the disorder. &#13;
 On completion of the pre-study questionnaires, participants were provided with a task information leaflet. &#13;
At this time, control and ADHD participants were presented with a vignette (Appendix B) detailing an individual trying to feign ADHD. Comparatively, those assigned to the feigning condition were presented with a vignette (Appendix C) that explained the symptoms of ADHD, and they were asked to imagine themselves in a situation in which they were to feign ADHD. All participants were then asked to complete the two eye movement tasks and the associated calibration trials. Healthy controls and those with ADHD were asked to complete the tasks honestly to the best of their ability, whereas those in the feigning condition were asked to complete the tasks while pretending to have ADHD (without any over-exaggeration). On completion of the tasks, all participants were informed that they would be entered into a lottery to win a £25 voucher and were provided with a debrief sheet (Appendix H), which explained the details of the study. &#13;
Data Analysis &#13;
DataViewer software version 3.2 was used to extract the raw EyeLink data, which were then analysed offline using the bespoke software SaccadeMachine. With this software, spikes and noise were removed by filtering out frames with a velocity signal greater than 1,500 deg/s or an acceleration signal greater than 100,000 deg/s². Fixations and saccadic events were identified using the EyeLink parser, and saccades were extracted alongside multiple temporal and spatial variables. Trials were eliminated when the participant did not direct their gaze to the central fixation. A temporal window of 80-700ms, measured from the onset of the target display, was used: anticipatory saccades made before 80ms and excessively delayed saccades made after 700ms were removed. The resulting data comprise latency and error rate, where latency is the time taken to initiate a correct saccade and the error rate is the percentage of trials the participant got wrong. Data from one control participant were removed because their low ACE score suggested probable mild cognitive impairment, and data from one ADHD participant were removed due to the lack of a formal diagnosis. &#13;
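To make the cleaning rules concrete, the following minimal Python sketch assumes per-frame velocity and acceleration arrays and per-trial latencies; it is an illustration of the thresholds described above, not the SaccadeMachine source: &#13;
import numpy as np&#13;
&#13;
VEL_MAX = 1500.0     # deg/s&#13;
ACC_MAX = 100000.0   # deg/s^2&#13;
&#13;
def frames_to_keep(velocity, acceleration):&#13;
    # Mask out frames whose velocity or acceleration exceeds the spike thresholds.&#13;
    bad = np.logical_or(np.abs(velocity) > VEL_MAX, np.abs(acceleration) > ACC_MAX)&#13;
    return np.logical_not(bad)&#13;
&#13;
def latency_is_valid(latency_ms):&#13;
    # Keep saccades launched 80-700ms after target onset; earlier responses are&#13;
    # treated as anticipatory, later ones as excessively delayed.&#13;
    return latency_ms >= 80 and latency_ms <= 700&#13;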
All data were then assessed to ensure they met the assumptions required for statistical analysis. First, the data were screened for outliers (±2SD); this revealed three outliers for both the pro- and anti-saccade measures. Given that these outliers may skew the subsequent analysis, all outliers were removed. The remaining data were then checked against the assumption of normality. Prosaccade latency satisfied the normality condition (see Figure 1), so a one-way ANOVA was applied to investigate the difference in latency across groups. As the prosaccade error rate data were skewed (see Figure 2), a Kruskal-Wallis H test was used to determine the difference across groups. After removing outliers, the data satisfied the normality condition for both anti-saccade latency (see Figure 3) and error rate (see Figure 4); hence a one-way ANOVA was used to test group differences for both measures, and a post hoc Tukey’s Honest Significant Difference test was used to determine the significance of differences in anti-saccade latency. &#13;
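The test-selection logic just described can be summarised in a short Python sketch (assuming one score per participant; an illustration only, not the study’s own analysis, which is archived as SPSS data): &#13;
import numpy as np&#13;
from scipy import stats&#13;
&#13;
def compare_groups(scores, groups):&#13;
    # scores: one value per participant; groups: matching group labels.&#13;
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)&#13;
    keep = np.logical_not(np.abs(scores - scores.mean()) > 2 * scores.std())&#13;
    scores, groups = scores[keep], groups[keep]   # +/-2SD outlier removal&#13;
    samples = [scores[groups == g] for g in np.unique(groups)]&#13;
    if all(stats.shapiro(s).pvalue > .05 for s in samples):&#13;
        return stats.f_oneway(*samples)  # normality satisfied: one-way ANOVA&#13;
    return stats.kruskal(*samples)       # skewed data: Kruskal-Wallis H test&#13;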
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3121">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3122">
                <text>SPSS.sav for results&#13;
Word.doc for demographic and data acquisition form&#13;
PDF for consent form</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3123">
                <text>George_2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3124">
                <text>Lettie and Delyth</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3125">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3126">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3127">
                <text>Data and Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3128">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3134">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3195">
                <text>Dr Megan Rose Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3196">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3197">
<text>Clinical&#13;
Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3198">
                <text>38</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3199">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="153" public="1" featured="0">
    <fileContainer>
      <file fileId="169">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c32bb813b138e5706ec76bb2e9c3a7b3.doc</src>
        <authentication>f4062334d78cf5f0c54a8646bfb0feb2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3150">
                <text>Grasping Ability in Virtual Reality: Effects of Eating Disorders on Perceptions of Action Capabilities</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3151">
                <text>Siri Sudhakar</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3152">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3153">
                <text>Knowledge of one’s body size is vital to be able to accurately judge an object’s size. For example, knowing the length of your arm is crucial to estimating the maximum distance reachable. Accurate perception of action capabilities is the result of a healthy mental body representation at a conscious and implicit level. This ability to use one’s mental body representation in action perception is assumed to be distorted in individuals with eating disorders (ED). However, unlike prior research, this study will be investigating both the effect of body image and schema distortion on action capabilities. Thus, this study will assess whether the ability to update one’s perception of their action capabilities in response to morphological changes is altered in individuals with EDs. The experiment had participants (N = 20) embody small (50% of hand size), normal, and large (150% of hand size) avatar hands (in virtual reality) and then estimate the maximum size of a box graspable. The size of the box, beginning as either large or small across all three conditions, was manipulated to observe haptic perception in participants. We found that individuals with ED showed similar estimates despite embodying different hand sizes alluding to their inability to successfully update their haptic perceptions. Low interoceptive awareness and body image disturbances were the root cause of this perceptional flaw in eating-disordered individuals. Treatment focused on improving the altered IA and implicit distortions in body schema could improve haptic perception in ED individuals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3154">
                <text>Action Capability, Eating Disorder, Interoceptive Awareness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3155">
<text>A priori power analysis was conducted through the G*Power software (Faul et al., 2007) to determine the sample size required to achieve adequate power (N = 30). The required power (1 - β) was set at .80 and the significance level (α) at .05. Based on Readman et al. (2021), who used the same methodology as this study, we anticipated a large effect size of 0.9; this was deduced because that study obtained a ηp² of .49 with a sample of N = 30. For the frequentist parameters defined, a sample size of N = 3 is required to achieve a power of .80 at an alpha of .05.&#13;
EDs are also notoriously variable. Given that previous studies using similar methodologies have typically recruited between 20 and 30 participants (Readman et al., 2020; Lin et al., 2020), we elected to recruit 30 participants (15 per condition). However, this study was only able to recruit 23 participants in total.&#13;
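For readers without G*Power, the frequentist calculation can be approximated in Python via statsmodels; the sketch below treats the design as a simple one-way comparison of two groups with Cohen’s f = 0.9, so it is an approximation rather than a reproduction of the repeated-measures calculation.&#13;
from statsmodels.stats.power import FTestAnovaPower&#13;
&#13;
# Approximate a priori total sample size for two groups, f = 0.9,&#13;
# alpha = .05, power = .80 (simplified one-way analogue).&#13;
n_total = FTestAnovaPower().solve_power(effect_size=0.9, alpha=0.05,&#13;
                                        power=0.80, k_groups=2)&#13;
print(n_total)&#13;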
Twenty-two participants from Lancaster and Lancaster University (seven males, 15 females) aged between 18 and 30 (Mage = 21.73, SDage = 1.98) took part in this study. Two participants were removed as extreme outliers, resulting in the present dataset (N = 20; Mage = 21.65, SDage = 2.06).&#13;
Amongst the participants, seven disclosed a diagnosis of an ED. In accordance with the revised Edinburgh Handedness Inventory (R-EHI) classification system (Milenkovic &amp; Dragovic, 2013), the majority of participants (N = 19) were right-handed, with only one participant left-handed. Borderline to high levels of anxiety, as measured by the Hospital Anxiety and Depression Scale (HADS; Stern, 2014), were observed in 16 participants, while seven participants showed similar levels of depression.&#13;
Eating Disorder Inventory (EDI): Participants with ED were also asked to complete the EDI. It is a self-report questionnaire that assesses the presence and level (depending on the estimate) of AN, BN, and Binge Eating Disorder (BED) (Augestad and Flanders, 2002). It consists of 64 items, with eight subscales measuring dimensions such as drive for thinness, body dissatisfaction, perfectionism, interpersonal distrust, and IA (Garner, Olmstead, &amp; Polivy, 1983; Vinai et al., 2016; Santangelo et al., 2022). Seven participants had an ED, while the remainder formed the healthy control group.&#13;
Design&#13;
This study used a 2 (between factor: Group – control vs. ED) x 3 (within factor: Hand size – small vs. normal vs. large) factorial design. The dependent variable (DV) is grasping ability, and the independent variables are group and hand size condition. All participants in each group experienced all hand size conditions. The order of condition completion was randomised across participants using a Latin square method. Such counterbalancing controls for confounding/extraneous variables and diminishes order and sequence effects, improving internal validity (Corriero, 2017).&#13;
Stimuli and Apparatus&#13;
Participants were seated an arm’s length away from the front of a standardized table. The Unity 3D© gaming engine with the Leap Motion plugin was used to create a colour 3D virtual environment, which participants viewed through an Oculus Rift CV1 head-mounted display (HMD). The HMD displayed the stereoscopic environment at 2,160 × 1,200 (split across the two eyes) at 90 Hz (Binstock, 2015). Head and hand movements were tracked in real time by the HMD and the Leap Motion hand-tracking sensor attached to it.&#13;
The HMD ensured that the participants’ perspective was updated in real time. Hand movements were updated in accordance with the virtual hand that was mapped onto the participant’s natural hands. Leap Motion for Unity provided assets such as avatar hands based on actual human hands. The virtual environment was visible to participants from a first-person perspective adjusted to their height. The VR display comprised a model room with a table located in the middle; upon this table were either two white dots (calibration trials) or a white box (test trials).&#13;
Questionnaires&#13;
Revised Edinburgh Handedness Inventory (R-EHI). Participants’ handedness was determined using the R-EHI. The revised version of the inventory was used as it accounts for, and improves upon, the inconsistencies and validity issues of the original questionnaire (Milenkovic &amp; Dragovic, 2013). Handedness is estimated from participants’ preferences for either hand when performing activities such as writing, drawing, throwing a ball, etc.&#13;
Hospital Anxiety and Depression Scale (HADS). The HADS questionnaire was also administered to all participants to assess the presence of borderline or abnormal levels of anxiety and depression. It is a quick questionnaire consisting of seven questions each for anxiety and depression, with the two subscales scored separately (Stern, 2014).&#13;
Procedure&#13;
Participation in this study took up to an hour of the participant’s time. It was conducted in the Whewell Building of Lancaster University. Participants were recruited partly through opportunity sampling and partly through advertisements. All participants received £5 for their contribution. All participants were native English speakers, had normal or corrected vision, and had no motor difficulties. Participants provided informed consent via a consent form signed before the onset of the study. They were also provided with a debrief sheet and were verbally debriefed at the end of the experiment.&#13;
The methodology of this study mirrors that of Readman et al. (2021). The experiment was conducted in a virtual environment (VE) through a VR device. The inclusion of VR allows for controlled changes to grasping ability, with responses collected in a manner similar to how an individual would act in the real world (Normand et al., 2011). Moreover, VR enabled interactions with the morphologically altered virtual body in real time, within a comparable physical environment, through the immersive system built from the head-mounted display (HMD) and motion sensors (Gan et al., 2021).&#13;
Participants completed the R-EHI, EDI, and HADS questionnaires before beginning the experiment. Participants were asked to don the HMD and were introduced to the virtual environment through a brief demonstration. They were given approximately 5 minutes to explore the environment, to familiarise themselves with the immersive VR experience and to ensure no undue effects occurred. Participants completed three experimental conditions: normal hand size, constricted hand size (50% of their hand size), and extended hand size (150% of their hand size). Each condition consisted of calibration and test trials.&#13;
Calibration trials. Participants were presented with the virtual table, upon which two horizontally spaced dots were located. Using their dominant hand, participants were asked to touch the left-most dot with the left-most digit and then the right-most dot with the right-most digit of that hand. This occurred for 30 trials to ensure that the participant had habituated to the virtual hand.&#13;
Test trials. Participants were instructed to place their hands behind their backs, out of sight. The Leap Motion sensor was then temporarily paused to ensure that the virtual hands were not visible. Once in this position, participants were presented with a block in the VE, which they had to envision grasping with their dominant hand from above. The size of the block was manipulated, made either larger or smaller in 1cm steps. Participants were asked to tell the researcher when the block reflected the maximum size they would be able to grasp. The final size was saved before the participant was presented with another block.&#13;
Grasping was defined to participants as the ability to place their thumb on one edge of the block, extend their hand over its surface, and place one of their fingers on the parallel edge. This grasp was also demonstrated to participants. Participants completed four test trials; in two, the block started small (0.03 cm) and was made larger, and in the remaining two the block started large (0.20 cm) and was made smaller. This was done to counteract the hysteresis effect, whereby prior visual stimuli influence later perception (Poltoratski &amp; Tong, 2014). Four grasp-ability estimates were therefore obtained for each experimental condition.&#13;
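A minimal sketch of how the four per-condition estimates can be aggregated (the function and variable names are assumptions for illustration, not the study’s actual analysis code):&#13;
import numpy as np&#13;
&#13;
def condition_estimate_cm(ascending_cm, descending_cm):&#13;
    # ascending_cm: two estimates from trials where the block grew larger;&#13;
    # descending_cm: two estimates from trials where it shrank. Averaging&#13;
    # across both directions offsets any residual hysteresis.&#13;
    return float(np.mean(list(ascending_cm) + list(descending_cm)))&#13;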
This study received ethical approval from Lancaster University Psychology department.&#13;
 &#13;
Data Analysis&#13;
An Analysis of Variance (ANOVA) is a statistical model used to examine differences in means (Rucci &amp; Tweney, 1980). The present dataset contains both a between-subjects factor (group) and a within-subjects factor (hand size); a mixed ANOVA therefore allows the means of the groups with which these variables are cross-classified to be compared.&#13;
This is a two-way analysis, as there are two independent variables (group and hand size) but only one DV (grasping ability estimate). ANOVA is appropriate for this dataset because the effect of both variables on the response estimate can be examined (Field, 2009). This study aims to establish the effect of group and hand size on grasping ability (GA); a mixed ANOVA identifies any significant effect of either factor on the GA estimate and examines their interaction. The results of the mixed ANOVA therefore help assess whether individuals with ED update their perceived action capabilities in response to changes in morphology.&#13;
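One way to run this 2 x 3 mixed ANOVA in Python is via the pingouin package; the file and column names below are assumptions for illustration only.&#13;
import pandas as pd&#13;
import pingouin as pg&#13;
&#13;
# Hypothetical long-format file: one row per participant x hand-size condition.&#13;
df = pd.read_csv("grasping_long.csv")&#13;
aov = pg.mixed_anova(data=df, dv="grasp_estimate", within="hand_size",&#13;
                     between="group", subject="participant")&#13;
print(aov)  # main effects of group and hand size, plus their interaction&#13;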
Data Preparation&#13;
The present dataset combined demographic, physical, and questionnaire-related (EDI, R-EHI, HADS) information with GA estimates across the hand size conditions (small vs. normal vs. large). The GA estimate for each condition was further sub-categorised by whether the box started large or small. Averages of these trials for the small starting box and the large starting box in each condition were taken, forming the mean grasp-ability estimates (cm).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3156">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3157">
                <text>Data/excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3158">
                <text>SUDHAKAR2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3159">
                <text>Alexia Hockett &#13;
Romina Ghaleh Joujahri</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3160">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3161">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3162">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3163">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3164">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3255">
                <text>Dr. Megan Rose Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3256">
                <text>MSc </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3257">
                <text>Cognitive, Perception </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3258">
                <text>20</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3259">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="154" public="1" featured="0">
    <fileContainer>
      <file fileId="172">
        <src>https://www.johnntowse.com/LUSTRE/files/original/3ddc0d86634b8437530ec3352beb2ebc.pdf</src>
        <authentication>1ad80421bc21a8ecbaac8b6704bb657f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes though eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3165">
                <text>Levodopa and antisaccade performance in Parkinson’s disease: the influence of intrinsic dopaminergic functioning, dopamine agonists and chronic anti-parkinsonian medication </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3166">
                <text>Amy Austin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3167">
                <text>14th September 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3168">
<text>The antisaccade (AS) task is a validated eye-tracking paradigm primarily used to assess response inhibition. Although several studies have established AS error rate and latency to be increased in Parkinson’s disease (PD), the evidence regarding the effect of existing anti-parkinsonian medication (e.g., levodopa) on these parameters is contradictory. According to the dopamine overdose hypothesis (DOH), the effect of levodopa on AS performance should be dependent upon the intrinsic dopaminergic functioning of the individual. The current study is the first to use spontaneous eye blink rate (SEBR), a proxy measure for dopamine activity, to investigate the influence of intrinsic dopaminergic functioning on AS performance following levodopa consumption. The influence of additional PD-related factors was also examined. SEBR and AS performance were assessed in eleven healthy controls (HC) and nine participants with PD. SEBR and AS performance were assessed twice in participants with PD, once 30 minutes prior to, and once one hour after, the consumption of levodopa. Pre-levodopa consumption SEBR was a significant positive predictor of AS error rate post, but not pre, levodopa consumption. Total years consuming anti-parkinsonian medications was positively predictive of AS error rate both pre and post levodopa consumption. The regular consumption of dopamine agonists was found to significantly predict fewer AS errors following the consumption of levodopa. The current results support the DOH; higher intrinsic dopaminergic functioning was associated with increased AS errors following the artificial stimulation of dopamine via levodopa. Therefore, artificial dopaminergic stimulation of an intrinsically sufficiently functioning dopaminergic system appears to produce an overstimulation/overdose effect, whereby consequential detrimental effects on AS performance/response inhibition are observed. The current findings go some way in explaining the inconsistencies within the literature. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3169">
<text>Parkinson’s disease, dopamine overdose hypothesis, spontaneous eye blink rate, levodopa, dopamine agonists, antisaccade</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3170">
<text>Twenty-one participants, 10 individuals with mild-moderate idiopathic PD (Mage = 67.10, SDage = 8.63) and 11 healthy control older adults of comparable age (HC; Mage = 66.82, SDage = 9.09), were recruited to the study. The mean ages of the recruited HC and PD individuals did not differ significantly, t(18.95) = -0.07, p = .943. Participants were recruited via established research databases and via the social network of the researcher. As the current study focused on PD, participants with a diagnosis of any neurological condition (beyond PD) were excluded. Additionally, as depression and anxiety influence an individual’s saccadic performance profile and SEBR (Jazbec et al., 2005; Mackintosh et al., 1983), individuals who obtained a clinically moderate depression or anxiety score, as measured by the Hospital Anxiety and Depression Scale (HADS), were excluded. Similarly, mild cognitive impairment (MCI) and dementia are associated with increased AS error rate and AS latency (Opwonya et al., 2022), and increased SEBR (D’Antonio et al., 2021). As such, those who presented a cognitive profile indicative of MCI/dementia (score &lt; 82 on the Addenbrooke’s Cognitive Examination-III, ACE-III; Hsieh et al., 2013) were excluded from the current study. Finally, as experimental stimuli in the current study were coloured red and green, individuals with red-green colour vision deficiency, detected via the Ishihara test (Ishihara, 1917), were also excluded. &#13;
On these grounds of exclusion, one individual with PD was excluded from the current study due to obtaining an ACE-III score indicative of MCI. Subsequently, nine individuals with mild-moderate idiopathic PD (Mage = 65.89, SDage = 8.21) and eleven HC individuals (Mage = 66.82, SDage = 9.09) participated in the study. All participants had normal or corrected to normal vision. &#13;
All participants with PD were classified as Hoehn and Yahr stage II or below (Hoehn &amp; Yahr, 1998), indicating they were physically independent and capable of completing all study tasks. At the time of testing, all PD participants were receiving anti-parkinsonian medication (see Table 2 for a summary of the PD sample’s anti-parkinsonian medication). All PD participants were tested under their normal medication regime; that is, participants attended the study 30 minutes prior to the consumption of their next, normally scheduled, dosage of levodopa-based medication. Accordingly, measures were obtained both pre (30 minutes prior) and post (1 hour after) levodopa consumption, permitting the respective investigations of pre and post levodopa consumption SEBR, motor symptom severity, AS performance and PS performance. &#13;
An online calculator computed the levodopa equivalent daily dosages (LEDD) for each participant with PD. LEDD indicates the equivalent amount of levodopa an individual receives from all anti-parkinsonian medications across a 24-hour window (Julien et al., 2021). The online calculator can be accessed via: https://www.parkinsonsmeasurement.org/toolBox/levodopaEquivalentDose.htm &#13;
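The underlying arithmetic is a weighted sum of daily doses. The sketch below is illustrative only; the conversion factors shown are commonly cited levodopa equivalences and should be treated as assumptions here, since the study itself relied on the online calculator. &#13;
# Illustrative levodopa-equivalence factors (levodopa-equivalent mg per mg of drug).&#13;
LEDD_FACTORS = {&#13;
    "levodopa": 1.0,      # immediate-release levodopa&#13;
    "ropinirole": 20.0,&#13;
    "pramipexole": 100.0,&#13;
    "rotigotine": 30.0,&#13;
}&#13;
&#13;
def ledd(daily_doses_mg):&#13;
    # daily_doses_mg, e.g. {"levodopa": 600, "ropinirole": 8}&#13;
    return sum(LEDD_FACTORS[drug] * mg for drug, mg in daily_doses_mg.items())&#13;
&#13;
# Example: ledd({"levodopa": 600, "ropinirole": 8}) gives 760.0 levodopa-equivalent mg/day. &#13;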
Materials and measures &#13;
Online questionnaire &#13;
A questionnaire comprised of a demographics and health screening survey, the Edinburgh handedness inventory (EHI), the HADS, and a PD and associated medication survey was developed and distributed via Qualtrics (Qualtrics, 2013). The questionnaire required 15 minutes to complete. &#13;
Demographics and health screening survey. Participants were asked to disclose key demographic and health information (e.g., age, sex, whether they had normal or corrected to normal vision). Participants were also asked to disclose any history of visual impairments, neurological conditions (beyond PD), psychiatric illness, or rheumatic illness. &#13;
The EHI (Oldfield, 1971). The EHI is a highly reliable (r = .97, p &lt; .001; Oldfield, 1971) and internally consistent (α = 0.88; Oldfield, 1971) self-report measure of an individual’s hand dominance (Edlin et al., 2015). Participants are requested to indicate their typical hand preference, via five-point Likert scales ranging from ‘always left’ to ‘always right’, when completing a range of daily activities (e.g., writing). A final score of ≥ 50 indicates right hand dominance, &lt; 50 to &gt; −50 indicates ambidexterity, and ≤ −50 indicates left hand dominance. As hand dominance typically corresponds to ocular dominance (McManus et al., 1999), the EHI was used to infer the dominant eye of each participant in the current study. Monocular eye tracking was then conducted on the dominant eye (Ehinger et al., 2019). &#13;
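The laterality cut-offs quoted above translate directly into a small classification rule, sketched in Python for clarity: &#13;
def ehi_dominance(score):&#13;
    # score: EHI laterality quotient, -100 (always left) to 100 (always right).&#13;
    if score >= 50:&#13;
        return "right-handed"&#13;
    if score > -50:&#13;
        return "ambidextrous"&#13;
    return "left-handed"  # score of -50 or below&#13;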
The HADS (Zigmond &amp; Snaith, 1983). The HADS is a short self-assessment questionnaire validated to detect anxiety and depression within the general population, inclusive of the elderly (Bjelland et al., 2002). Respondents are required to indicate, via four-point Likert scales, how 14 items relate to their recent feelings. Responses range from ‘0’ (the item has little relevance to recent feelings) to ‘3’ (the item is significantly representative of recent feelings). Likert responses are summed separately for the anxiety and depression relevant items. Scores of seven or less indicate no notable presence of anxiety or depression; scores between eight and 10 indicate mild levels, between 11 and 14 moderate levels, and between 15 and 21 severe levels. &#13;
PD and associated medication survey. Individuals with PD were asked to disclose further health information regarding the number of years since their PD diagnosis, which anti-parkinsonian medications they were currently receiving, the daily dosages of these medications and the total number of years they had been consuming anti-parkinsonian medications. &#13;
ACE-III (Hsieh et al., 2013) &#13;
The ACE-III is a well-validated (Hsieh et al., 2013), highly reliable and internally consistent (ICC = 0.92 and α = 0.87, respectively; Takenoshita et al., 2019) cognitive assessment used to screen for the presence of MCI and dementia syndromes (Hsieh et al., 2013). To provide a global neuropsychological evaluation, participants complete tasks assumed to relate to five principal cognitive functions: memory, language, attention, visuospatial skills, and verbal fluency (Hodges &amp; Larner, 2017). Scores from each of the five domains are summed, giving an overall score out of a maximum of 100. Higher scores indicate better cognitive functioning; a score below 82 is indicative of cognitive impairment. &#13;
Ishihara colour deficiency test (Ishihara, 1917) &#13;
The Ishihara colour deficiency test is a 38-item assessment of red-green colour perception. Typical red-green colour vision is marked by the ability to correctly decipher a number or pattern embedded within 38 red/green circular images. The test requires three minutes to complete. &#13;
MDS-UPDRS (Goetz et al., 2008) &#13;
Both motor and non-motor PD symptoms were evaluated using the MDS-UPDRS, which comprises four distinct subscales. Subscale I focuses on the non-motor symptoms associated with PD (e.g., cognitive impairment, dopamine dysregulation syndrome), whereas subscales II-IV focus on the motor symptoms. Subscales I, II and IV require participants to respond retrospectively, with answers reflecting their average symptoms/experiences over the previous week, whereas subscale III directly assesses current functioning via a motor examination. The motor examination requires participants to perform a series of motor tasks (e.g., finger tapping, walking, arising from a chair) under the observation of the examiner, who rates the severity of motor impairment displayed during each task. All subscales of the MDS-UPDRS are scored on Likert scales from ‘0’ (no impairment) to ‘4’ (the most severe impairment). Hoehn and Yahr stages (Hoehn &amp; Yahr, 1998) were calculated based upon the MDS-UPDRS assessment. The cumulative score of subscales I-IV provides an overall MDS-UPDRS score indicative of PD severity; a maximum score of 199 reflects the most severe disability resulting from PD (Holden et al., 2018). The MDS-UPDRS requires approximately 30 minutes to complete. &#13;
SEBR &#13;
SEBR was assessed by recording participants’ eye movements while they sat at rest. The recording device was located approximately 55cm directly in front of the participant. Participants were not informed that they were completing an assessment of their blink rate, nor were they engaged in conversation with the examiner, as both informing participants that their blink rate is being assessed and conversing increase SEBR (Doughty, 2001). Participants’ eye movements were recorded for two and a half minutes; however, only the last minute of each recording was coded for SEBR (one minute is sufficiently long to obtain a representative blink rate; Deuschl &amp; Goddemeier, 1998). A blink was identified (and coded accordingly) as a full eyelid closure resulting from bilateral movement of the eyelids (Kimber &amp; Thompson, 2000). SEBR was scored as the number of blinks per minute. PD participants’ pre-levodopa consumption SEBR was considered their baseline SEBR, reflective of intrinsic dopaminergic functioning (Kimber &amp; Thompson, 2000). &#13;
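In code, the coding rule amounts to counting blink onsets within the final minute of the recording (a sketch, assuming blink onset times in seconds from the start of the recording): &#13;
def sebr(blink_onsets_s, recording_len_s=150.0, scored_window_s=60.0):&#13;
    # Only the last minute of the 2.5-minute recording is scored, so SEBR&#13;
    # equals the number of blink onsets falling inside that window.&#13;
    window_start = recording_len_s - scored_window_s&#13;
    return sum(1 for t in blink_onsets_s if t >= window_start)&#13;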
Eye tracking tasks &#13;
Apparatus &#13;
A desktop-mounted eye tracker (EyeLink Desktop 1000), operating in monocular mode with a sampling rate of 500 Hz, was used to record eye movements of the participant’s dominant eye. An adjustable chin rest with an attached forehead rest was used to minimise head movements. The eye-tracking camera was located at the base of the stimulus-presenting computer monitor. Participants sat approximately 55cm away from the eye-tracking camera and computer monitor. A 4-point calibration, whereby participants are asked to fixate upon a red circle as it moves to the top, bottom, right and left sides of the computer screen, was used prior to the commencement of all eye-tracking tasks; frequent calibration improves the accuracy of eye-tracking data (Pi &amp; Shi, 2019). All eye-tracking tasks were developed and operated using Experiment Builder software version 1.10.1630. Habitual eyeglass wearers were not required to remove their eyeglasses during the eye-tracking tasks, which required approximately 10 minutes to complete. &#13;
Prosaccade task &#13;
Participants completed four practice trials and 16 experimental gap trials. To centre a participant’s gaze at the start of each trial, a white fixation stimulus was presented for 1000 milliseconds (ms) in the centre of a black computer screen. A red lateralised target was then displayed randomly either to the right or the left of the central fixation for 1200ms at 4° eccentricity. The PS task operated according to the gap paradigm: to create a temporal gap between fixation and target stimuli, a black interval screen was presented for 200ms between the extinguishing of the white fixation stimulus and the presentation of the red target stimulus. For the PS task, participants were instructed to shift their visual focus towards the location of the red target as quickly and as accurately as possible. &#13;
Antisaccade task &#13;
Participants completed four practice trials followed by 24 experimental gap trials. Participants were presented with a white central fixation stimulus on a black computer screen for 1000ms. Following a 200ms black interval screen, a green lateralised target stimulus was presented at random to either the left or right of the central fixation. The green target was displayed for 2000ms at 4° eccentricity. Participants were instructed to shift their visual focus in the opposite direction from where the green target stimulus appeared. For example, in a successful trial, if the green target stimulus was presented left-lateralised, the participant would direct their gaze to the right side of the computer screen. &#13;
Procedure &#13;
The present study was reviewed and approved by Lancaster University’s ethics committee. All participants provided informed consent prior to participating. &#13;
Participants were tested on a single day and testing sessions took no longer than two hours. Individuals with PD completed SEBR assessments, MDS-UPDRS III motor examinations and all eye tracking tasks twice: once 30 minutes prior to consuming their usual scheduled dose of levodopa medication, and once again one hour following the consumption of their levodopa medication. Prior research indicates that one hour is sufficient for levodopa to be metabolised and produce therapeutic effects (Lu et al., 2019). This method of testing the effect of anti-parkinsonian medications is widely used within the literature and no detrimental effects of this method have been reported (Cools et al., 2003). Similarly, re-test on the PS and AS tasks does not significantly influence performance (Larrison-Faucher et al., 2004). HC participants completed all study tasks once. &#13;
All participants completed the online questionnaire 48 hours prior to attending testing sessions. Upon arriving for testing, all participants completed an assessment of SEBR followed by the PS and the AS tasks. HC participants then completed the ACE-III and the Ishihara test, which concluded their participation in the study. PD participants continued with further testing: they next completed the MDS-UPDRS subscale III motor examination, then consumed their usual dose of levodopa medication at their usual time. During the one-hour levodopa metabolisation period, participants with PD completed subscales I, II and IV of the MDS-UPDRS, the ACE-III and the Ishihara test. &#13;
Once one hour had elapsed, individuals with PD re-completed the assessment of SEBR and the PS and AS tasks, and were re-assessed via the MDS-UPDRS subscale III motor examination. Thus, motor symptom severity (MDS-UPDRS III), SEBR and eye-tracking data were obtained in both the pre-levodopa (baseline) and post-levodopa medication states. &#13;
Data processing &#13;
Raw data were extracted via EyeLink using Data Viewer software (version 3.2) and processed offline using the bespoke software SaccadeMachine (Mardanbegi et al., 2019). SaccadeMachine removes noise and spikes within the data: frames with a velocity signal greater than 1,500 deg/s or an acceleration signal greater than 100,000 deg/s² are filtered out. Fixations and saccadic events were detected via the EyeLink parser. Trials were excluded where participants failed to direct their gaze to the central fixation stimulus. To ensure saccadic data reflected responses to target presentation, a temporal window of 80-700ms from the initial onset of the target stimulus was used (i.e., anticipatory saccades produced prior to 80ms, and excessively delayed saccades produced after 700ms, were excluded). The following variables were extracted from the processed data: PS latency (the time from target onset to the first correct fixation), PS error rate (the number of trials on which the participant failed to generate a reflexive saccade to fixate upon the target stimulus), AS latency (the time from target onset to the first correct fixation in the direction opposite the target stimulus), and AS error rate (the number of trials on which the participant erroneously performed a reflexive PS towards the target stimulus instead of looking away). &#13;
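As a purely illustrative aid (this is not SaccadeMachine’s implementation), the exclusion rules above could be expressed in R as follows, assuming a hypothetical data frame saccades with columns velocity, acceleration, latency_ms, task and correct: &#13;
# Hypothetical sketch of the exclusion rules; not SaccadeMachine itself.&#13;
library(dplyr)&#13;
clean = saccades %>%&#13;
  filter(!(velocity > 1500)) %>%        # drop velocity spikes (deg/s)&#13;
  filter(!(acceleration > 100000)) %>%  # drop acceleration spikes (deg/s^2)&#13;
  filter(latency_ms >= 80) %>%          # drop anticipatory saccades&#13;
  filter(!(latency_ms > 700))           # drop excessively delayed saccades&#13;
ps_latency = mean(filter(clean, task == "PS", correct)$latency_ms)  # PS latency&#13;
as_errors = sum(filter(clean, task == "AS")$correct == FALSE)  # AS error count&#13;
</text>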
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3171">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3172">
                <text>Data/R.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3173">
                <text>Austin 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3174">
                <text>Rachel Jordan&#13;
Sian Reid</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3175">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3176">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3177">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3178">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3179">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3403">
                <text>Dr Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3404">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3405">
                <text>Neuro-clinical psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3406">
                <text>20 (9 individuals with mild-moderate Parkinson's disease, 11 healthy control individuals of similar age)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3407">
                <text>Regression, T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="155" public="1" featured="0">
    <fileContainer>
      <file fileId="161">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8154f97af93267514bfb20a6c3f3ef81.doc</src>
        <authentication>d960205f74b85b3da78afddb4fda542d</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3180">
                <text>Farmer and Non-Farmer Attitudes towards Alternative Animal Products</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3181">
                <text>Chloe Crawshaw</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3182">
                <text>23/09/22</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3183">
                <text>Farmers’ livelihoods and way of living could be argued to be under threat from the simultaneous rapid rise of plant-based products, the development of cultured products, and our growing understanding of the detrimental impact of traditional animal agriculture. Little research has investigated farmers’ attitudes towards cultured and plant-based products. Furthermore, farmers appear to have limited awareness of these animal product alternatives. This study presented 45 omnivorous farmers and 53 omnivorous non-farmers with information about plant-based burgers, cultured burgers, plant-based milk, and cultured milk. Product acceptance and COM-B facilitators and barriers were explored. Farmers were less accepting of all alternative products than non-farmers, suggesting that their vested interest in the continuation of traditional animal agriculture affected their attitudes towards alternative products. Closer inspection of farmer acceptance suggests that personal investment in animal agriculture also led to differences within farmers, with occupational farmers being less accepting of the products than members of farming families. The findings are interpreted using the Transtheoretical Model to suggest that, regarding the adoption of alternative products, occupational farmers appear to be in the rejection stage, whereas members of farming families appear to be in the contemplation stage. As occupational farmers had more negative attitudes towards the alternative products, they appear more likely to consider the alternatives a threat to their livelihood.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3184">
                <text>farmers, plant-based alternatives, cultured products, COM-B Model, Transtheoretical Model</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3185">
                <text>Participant Recruitment and Exclusions&#13;
Participant recruitment followed a pre-registered plan (https://aspredicted.org/blind.php?x=QL3_H96). Between July and August 2022, two groups of participants were recruited: adults with experience of livestock farming (Farmers), and a comparison group of adults without experience of livestock farming (Non-Farmers). Farmers make up a very small percentage (0.2%) of the UK population (DEFRA, 2021), so we included current farmers, retired farmers, farm workers, and members of farming families.  &#13;
Fifty-five livestock farmers, predominantly living in Gloucestershire, were recruited using snowball sampling. Farmers who were known to the author were first contacted via telephone or social media, or were visited in person. Interested participants were provided with the URL link to the questionnaire, a brief description of the study, and a request to forward the information to other individuals in the farming community. Individuals without internet access received a paper copy of the questionnaire. &#13;
Sixty-one non-farmers were recruited through snowball sampling using the same method as for farmers. As farmers are typically older males (DEFRA, 2019), we attempted to match the ages of the non-farmers to the farmers, and effort was made to recruit female farmers and members of farming families. Our recruitment plan was to recruit a minimum of 40 participants per group. To qualify for the study, farmers and non-farmers had to be omnivores. &#13;
A further 23 farmers and 10 non-farmers were recruited using Prolific by pre-screening for those in the ‘Agriculture, Food, and Natural Resources’ employment sector; the description of the study also encouraged participation among those with “experience of working with farmed animals.” &#13;
A total of 130 participants consented to participate: 55 farmers, 61 non-farmers, and a further 14 who were excluded because they did not reach the demographics section and so could not be classified into a group. Following our preregistered exclusion criteria, 18 participants who reported dietary restrictions were excluded (10 Farmers and 8 Non-Farmers). The final sample consisted of 45 Farmers and 53 Non-Farmers. &#13;
Design and Procedure &#13;
A 2x4 mixed design was used, with Group as a between-subjects factor with two levels (Farmer and Non-Farmer) and Product type as a within-subjects factor with four levels (plant-based burgers, cultured beef burgers, plant-based milk, and cultured cow’s milk). Participants completed an online questionnaire on Qualtrics (Qualtrics, 2005) that “drew attention to existing and emerging food innovations and explored beliefs and attitudes towards these products”; see Appendix A. The questionnaire took approximately 15 minutes. &#13;
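As an illustration of how this design maps onto data, the following R sketch lays out hypothetical long-format data; the column names are illustrative rather than the study’s actual variables: &#13;
# Hypothetical long-format layout for the 2 (Group) x 4 (Product) mixed design:&#13;
# one row per participant-product combination.&#13;
d = expand.grid(&#13;
  participant = 1:98,&#13;
  product = c("plant_burger", "cultured_burger", "plant_milk", "cultured_milk")&#13;
)&#13;
d$group = ifelse(d$participant > 45, "Non-Farmer", "Farmer")  # between-subjects&#13;
# Group differences per product could then be tested non-parametrically, e.g.:&#13;
# wilcox.test(acceptance ~ group, data = subset(d, product == "plant_burger"))&#13;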
Ethical Statement &#13;
The study was approved by Lancaster University’s Department of Psychological Ethics Committee. Participation was anonymous and Farmers were not asked to disclose the name or location of their farm. All participants gave their informed consent before accessing the questionnaire. On completion of the questionnaire, participants were debriefed, reminded of their right to withdraw their data, and were thanked.&#13;
Materials&#13;
	The questionnaire comprised six sections: vignettes, product acceptance, facilitators and barriers to product acceptance, consumer behaviour, demographics, and farming information.  &#13;
Vignettes&#13;
Participants were presented with a brief description of factory farming, including its prevalence in the UK and the negative consequences for farmed animals and the environment. See Appendix B for full vignette details and references. Factory farming was chosen as it is the main method of farming in the UK (FAIRR, 2016). Participants were then presented with brief descriptions of plant-based products and methods of creating cultured animal products. Product features were compared against traditional animal products, including sensory qualities, nutritional content, animal involvement, and environmental impact. Using a table similar to that of Van Loo et al. (2020), participants were presented with a comparison of the relative environmental impact of a plant-based soya burger and a cultured beef burger against a factory-farmed beef burger.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3186">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3187">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3188">
                <text>Crawshaw2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3189">
                <text>HanYi Wang&#13;
Amie Suthers</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3190">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3191">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3192">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3193">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3194">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3210">
                <text>Dr Jared Piazza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3211">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3212">
                <text>Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3213">
                <text>98 (45 Farmers and 53 Non-Farmers)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3214">
                <text>Chi-squared&#13;
Correlation&#13;
Kruskal-Wallis, MANOVA, Wilcoxon Signed Rank, Mann-Whitney U</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="156" public="1" featured="0">
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3205">
                <text> Dr Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3206">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3207">
                <text>Neuro-clinical psychology </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3208">
                <text>20</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3209">
                <text>T-test and regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="158" public="1" featured="0">
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3220">
                <text>Exploring the Effectiveness of Metaphors in Video Advertising - the Interaction Effect of Different Cultural Groups and Different Metaphors </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3221">
                <text>Lesley Wu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3222">
                <text>7th September 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3223">
                <text>Metaphors are often used in contemporary advertising, and previous research has confirmed that advertisements with metaphors are more effective than literal ones. At the same time, research into the role of metaphors has become more comprehensive, moving from traditional metaphor theories based solely on literal language to the study of the interactive effects of different modalities of metaphor (multimodal metaphor). The aim of this study was to understand the differences in the responses of different cultural groups when exposed to advertisements containing different types of metaphors (needs-highlighting metaphor vs. feature-highlighting metaphor). To this end, a 2 (culture: British, Chinese) x 3 (advertisement type: feature-highlighting metaphors, needs-highlighting metaphors, and literal advertisements) mixed-design experiment was conducted.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3224">
                <text>Marketing&#13;
Psycholinguistics</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3225">
                <text>Design &#13;
To quantify the extent to which creative metaphors in video advertising contribute to the effectiveness of an advertisement, a quantitative research method was used in this study. To test whether there was an interaction effect between cultural group and metaphor type, the experiment had a 3x2 mixed design, with a within-subjects factor of advertisement type (feature-highlighting metaphors, needs-highlighting metaphors, and literal advertisements) and a between-subjects factor of participants’ culture (British and Chinese). The dependent variables were attitude toward the product/advert and purchase intentions.  &#13;
Participants &#13;
Fifty-three participants were recruited through convenience sampling and took part in the study by completing an online survey. Responses from participants who either did not complete the consent form or did not answer all the questions were excluded from the analyses. This left a total of 40 responses: 20 from Western participants (10 men and 10 women) and 20 from Chinese participants (9 men and 11 women). Table 1 provides an overview of the participants’ information. Most of the participants were studying at Lancaster University at the time; some of the Chinese participants were living in China. As the aim of the experiment was to examine cultural differences, no specific age restrictions were set.   &#13;
Materials &#13;
In the current experiment, the selection of stimulus conditions was based on Pan’s (2020) study. However, to investigate the pattern and consistency of people’s responses under different conditions, the number of stimuli per condition was larger in this experiment. The stimuli consisted of 9 video ads in total: 3 ads each for the literal, feature-highlighting metaphor, and needs-highlighting metaphor conditions. All ads featured tangible products (perfume, body wash and deodorant), with 3 ads per product covering all 3 conditions. The experimental manipulation was based on the metaphorical dimension of the advertisements. Table 2 provides an overview of the advertisement conditions and the links to view them.  &#13;
The metaphor conditions contained at least one metaphor in the stimuli, while the literal advertisements were used as a control condition. The length of the selected advertisements was controlled to be less than 120 seconds (about 2 minutes). Advertisements created in recent years, between 2012 and 2021, were chosen.  &#13;
Phau and Prendergast’s (2000) study found that consumers associate the image of a brand with the image of its country of origin. To minimise the influence of consumers’ previous perceptions of brand image, the advertisements chosen for this experiment were made for well-known brands whose countries of origin were all developed countries, such as the USA, the UK and Japan. &#13;
Advertisements created in different countries were chosen; accordingly, the original advertisements were in Chinese, English and Japanese. All advertisements were subtitled in both Chinese and English, and the subtitles were checked by native Chinese speakers with undergraduate degrees in Japanese translation and English translation. As the videos exceeded the maximum attachment size that could be added to the Qualtrics questionnaire, the video advertisements with bilingual subtitles were uploaded to OneDrive and the links were added to the questionnaire for participants to view. All selected video advertisements were sourced from internet platforms. &#13;
To measure attitudes toward the ad and purchase intentions, questions were formulated based on questions previously used in marketing research (Jeong, 2008; Kim, Baek &amp; Choi, 2012; Pan, 2020). &#13;
Attitudes towards advertisement. Participants were asked to rate/evaluate the ad on 4 scales, i.e., to what extent they agreed that the ad is ‘good’, ‘favourable’, ‘pleasant’, and ‘appealing’; the scales ranged from 1 (Strongly disagree) to 7 (Strongly agree) (Jeong, 2008).  &#13;
Purchase intentions. Participants were asked to rate the value of the item being promoted, the probability of purchasing the promoted product, and the probability of recommending the products to their family or friends (Maheswaran &amp; Meyers-Levy, 1990). &#13;
The original questions above were in English and were translated into Chinese for the Chinese participants who took part in this study. The translations were checked for equivalence of meaning by a native Chinese-speaking researcher proficient in English. The variables and measures used in this study are provided in Table 3. &#13;
 &#13;
Procedure &#13;
All procedures relating to data collection and informed consent were reviewed and approved by the Faculty of Science and Technology Research Ethics Committee at Lancaster University. The data collected were anonymised upon extraction from Qualtrics; no participant information beyond the critical data is included. &#13;
All participants were asked to complete an online questionnaire. They could access the survey either via a QR code or via the shared link from Qualtrics. The questionnaire was set up on Qualtrics in English and Chinese versions. The first section included a participant information sheet and the consent form, followed by the experimental section.  &#13;
In this section, each video advertisement and its corresponding questions were grouped into a separate question block, each with a link to a specific advertisement for participants to view. This was to ensure that participants focused on watching and evaluating one advertisement at a time. To move to the next block, participants had to complete the questions evaluating the current video and press a button to access the next question block. Participants rated the properties of each advertisement immediately following exposure to it. The order of ads presented was fully randomised and differed for each participant. To prevent participants’ overall liking of the advertised brand, product or brand spokesperson from influencing their assessment of each attribute of the advertisement, and to obtain valid data, participants were reminded in each question block to rate the advertisement itself: “If you have any knowledge of the brands or products, please try to rate the following ads, by excluding your liking of them (including the celebrity spokesperson) and your current purchasing needs.” Finally, participants clicked the submit button and were debriefed and thanked for their participation. The study took approximately 40 minutes and participants were paid £6.50 for their time.  &#13;
Statistical analysis &#13;
The data were examined and analysed using SPSS software. A two-way mixed ANOVA (analysis of variance) was used to examine the effects of the two independent variables, advertisement condition (within-participants, with 3 levels: needs-highlighting metaphor, feature-highlighting metaphor, literal) and culture (between-participants, with 2 groups: Chinese, British), on the two dependent variables: attitude towards the advertisement and purchase intentions.&#13;
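For readers working in R rather than SPSS, a roughly equivalent specification using the afex package might look as follows; the long-format data frame d and its column names (subject, culture, ad_type, attitude) are hypothetical: &#13;
# Hypothetical R equivalent of the SPSS two-way mixed ANOVA (afex package).&#13;
library(afex)&#13;
m_attitude = aov_ez(id = "subject", dv = "attitude", data = d,&#13;
                    between = "culture", within = "ad_type")&#13;
summary(m_attitude)  # repeated with dv = "purchase_intention" for the second DV&#13;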
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3226">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3227">
                <text>SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3228">
                <text>Wu2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3229">
                <text>Chrisie Pullin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3230">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3231">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3232">
                <text>English and Chinese</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3233">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3234">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3235">
                <text>Francesca Citron</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3236">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3237">
                <text>Marketing&#13;
Psycholinguistics</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3238">
                <text>40</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3239">
                <text>Mixed ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="160" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3260">
                <text>Assessing comprehension of health-related texts in non-native and native English speakers</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3261">
                <text>Khushboo Anup Agarwal</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3262">
                <text> 13/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3263">
                <text>Background — Written health materials are often difficult to comprehend when they mismatch the reading ability of the target audience. We need to consider how to make texts accessible by considering the individual differences that affect comprehension of written health materials. Surprisingly, very few studies indicate how non-native and native English speakers differ in their comprehension of written health texts. Methods — A total of 557 participants took part in the present study. Participants responded to multiple-choice questions designed to examine understanding of 25 health texts with different text properties. Each participant also completed tests measuring individual differences in demographics, reading strategy, vocabulary, and health literacy. Findings — Using mixed-effects logistic regression analysis, we found that non-native and native English speakers differed in the accuracy of their responses to written health texts. Effects of vocabulary skills and text readability were significant, and these effects differed between language groups. Native speakers of English with higher vocabulary scores were more likely to respond correctly to written health texts, and native speakers were more likely to respond correctly as text readability increased. Conclusion — Future experimental studies should examine the effects of vocabulary training on reading comprehension for different language groups, and should also consider sources of variance due to individual differences and text properties across language groups.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3264">
                <text>reading comprehension, health literacy, individual differences, language groups.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3265">
                <text>Design&#13;
We conducted experimental research on factors that influence the response to written health information, aiming to answer the research question:&#13;
RQ.1 How do reader attributes such as age, vocabulary skills, health literacy and reading strategy skills, along with text features, interact to predict the comprehension of health information in written texts for native and non-native English speakers?&#13;
&#13;
We conducted the study to test the hypotheses:&#13;
1. Comprehension will be better for people with higher scores on reading skill, vocabulary and health literacy. Comprehension will be lower as age increases. &#13;
2. Comprehension will be better for responses to texts which are higher on measures of readability, cohesion, word frequency, referential cohesion, and passive sentences.&#13;
3. There will be differences between native and non-native speakers. Comprehension will be better for native speakers. The effect of age, reading strategy, vocabulary skills, and health literacy will be different for different language groups. The effect of cohesion, readability, word frequency, referential cohesion, and passive sentences will be different for different language groups.&#13;
&#13;
Ethical approval. The data collection plan and study design were reviewed and approved by a member of the Psychology Department Research Ethics Committee. &#13;
Pre-registration. The study has not been pre-registered.  &#13;
&#13;
Participants&#13;
	Participants were recruited primarily using opportunity and snowball sampling, and were invited via social media such as Facebook, Instagram, and WhatsApp. We aimed to recruit bilingual/multilingual Indian residents (18+) with access to the internet. We collected 201 responses; our criterion for including participant data in the analyses was that respondents had to complete 80 percent or more of the survey, which left 112 responses. Three respondents who reported an age of 100 were removed from the data set as implausible, leaving 109 observations. To enable a comparison between native and non-native speakers of English, we combined the data on responses from Indian and Chinese non-native speakers with data on responses from native speakers of English collected previously by the supervisor, Rob Davies. This gave a large final sample of 557 participants for analysis, with a minimum age of 18 and a maximum age of 81; the average age was 28, skewing towards a younger population. The sample consisted of 392 females, 160 males, 1 non-binary participant, and 4 who preferred not to say. There were 273 participants who spoke English as their first language and 284 who spoke English as their second language. &#13;
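A minimal sketch of these two exclusion steps in R, assuming a hypothetical data frame responses with columns prop_complete (proportion of the survey completed) and age, might look as follows: &#13;
# Hypothetical sketch of the exclusion steps; column names are illustrative.&#13;
library(dplyr)&#13;
analysed = responses %>%&#13;
  filter(prop_complete >= 0.80) %>%  # keep respondents completing 80% or more&#13;
  filter(age != 100)                 # drop the implausible age-100 entries&#13;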
	All participants were debriefed, and steps were taken to ensure confidentiality and anonymity. &#13;
&#13;
Materials&#13;
	We collected information on participants’ attributes and on the linguistic properties of the texts to examine their influence on the accuracy of responses made by participants to questions about the health information. To measure participant attributes, we assessed demographic details and participants’ vocabulary knowledge, health literacy, and reading strategy. The health texts differed in their linguistic properties, as measured by word frequency, readability (Flesch score and grade level), number of passive-voice sentences, cohesion, and referential cohesion (Coh-Metrix).&#13;
Vocabulary knowledge.&#13;
	The Shipley Vocabulary Test (Shipley et al., 2009) was used to test participants’ vocabulary knowledge, as it predicts 39-45% of the variance in reading comprehension (Landi, 2010). The test presents questions in a multiple-choice format: each question contains a word followed by four options, one of which is the correct meaning of the word. Higher scores indicate greater vocabulary knowledge. &#13;
Health Literacy. &#13;
The Health Literacy Vocabulary Assessment (HLVA), developed by Ratajczak (2020) and adapted for online presentation by Chadwick (2020), was used to test participants’ health literacy. The adapted version of the HLVA contains 16 multiple-choice word items: each question contains a word followed by four options, and the participant must select the correct meaning of that word. High scores on the HLVA indicate high health literacy vocabulary. &#13;
Reading strategy.&#13;
To determine participants’ motivation for reading and understanding reading strategies, we used Calloway’s (2019) third sub-test: Desire for Understanding and Reading Regulation Strategies. The items have been developed to measure the extent to which readers are willing to expend cognitive effort to understand a written text (Van den Broek et al., 2001). A higher score on this measure predicts better comprehension (Calloway, 2019). &#13;
Demographics.&#13;
We collected participants’ demographic characteristics: gender (coded: Male, Female, non-binary, prefer not to say); education (coded: Secondary, Further, Higher); and ethnicity (coded: White, Black, Asian, Mixed, Other); age; native language.&#13;
Health information stimulus text sampling.&#13;
Comprehension passages were selected based on a previous paper by Davies and colleagues (in prep.). In total there are 25 comprehension passages; however, reading 25 passages in one sitting could lead to fatigue in the reader, so we created 5 sets of 5 comprehension passages, and each participant was randomly given one set. The comprehension passages were followed by questions in a multiple-choice format. The response to each question was either right or wrong, indicating whether the reader understood the passage. The questions were constructed to probe the most important information in each text, such as who the information was relevant to, who was involved in diagnostic or treatment procedures, and the risks and benefits of different options. They could not be answered by simply matching or referring to the text, but required text-level and interpretation-level comprehension processing to choose the correct answer (Kintsch, 1994).&#13;
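The mixed-effects logistic regression analysis used to test these hypotheses (see the study description) could, for illustration, be specified along the following lines in R with lme4; the variable names are hypothetical and this is not the study’s exact model specification: &#13;
# Minimal sketch of a mixed-effects logistic regression on response accuracy,&#13;
# with crossed random intercepts for participants and texts (names illustrative).&#13;
library(lme4)&#13;
m = glmer(correct ~ vocabulary + readability * language_group +&#13;
            (1 | participant) + (1 | text_id),&#13;
          data = d, family = binomial)&#13;
summary(m)&#13;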
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3266">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3267">
                <text>Excel spreadsheets - .csv &#13;
R Script - .r&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3268">
                <text>Agarwal2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3269">
                <text>Huzaifah Adam, Coco, Alex Myroshnychenko</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3270">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3271">
                <text>This work is based on Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49(4), 294–303.&#13;
White, S., Happé, F., Hill, E., &amp; Frith, U. (2009). Revisiting the Strange Stories: Revealing mentalizing impairments in autism. Child Development, 80(4), 1097–1117.&#13;
McNamara, D., &amp; Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384. https://doi.org/10.1016/S0079-7421(09)51009-2&#13;
O’Reilly, T., &amp; McNamara, D. S. (2007). Reversing the reverse cohesion effect: Good texts can be better for strategic, high-knowledge readers. Discourse Processes, 43(2), 121–152. https://doi.org/10.1080/01638530709336895&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3272">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3273">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3274">
                <text>Developmental, Other</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="161" public="1" featured="0">
    <fileContainer>
      <file fileId="158">
        <src>https://www.johnntowse.com/LUSTRE/files/original/aaba3f802433d1a1ec1b363658d8b321.docx</src>
        <authentication>23a0c8cc680512f1bf66290ce3a72da3</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3275">
                <text>Exploring the impact of rewards on contextual cueing effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3276">
                <text>Wen Fan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3277">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3278">
                <text>        The environment contains a vast amount of complex visual information, and the individual's visual processing system has only a limited capacity to process it, so selective attentional mechanisms prioritise the most valuable information. Fixed contextual cues in the environment help us to allocate attentional resources efficiently. In their study of context, Chun and Jiang proposed the contextual cueing (CC) effect, which is likely to reflect implicit learning driven by selective attention: participants search for the target faster in a repeated configuration than in a random configuration, because fixed contextual cues help locate the target. This effect can be moderated by manipulating external motivation, i.e., reward. However, there is still considerable debate as to whether high rewards enhance the CC effect, and whether rewards act on the CC effect itself or on the positional probability learning effect. The present experiment used a classical contextual cueing task and a mixed between- and within-subjects experimental design to explore the effect of reward on the contextual cueing effect. &#13;
        The results showed that high rewards did not enhance the CC effect significantly more than low rewards, but high rewards did facilitate the target probability learning effect. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3279">
                <text>contextual cueing effect, reward, selective attention </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3280">
                <text>Participants &#13;
   Fifty-two Lancaster University students (20 identified as male and 32 as female; age M=23.9, SD=2.55 years, range: 19-33 years) participated in the experiment. Two participants were excluded from the final analysis (see below for details). &#13;
   All participants had normal or corrected-to-normal vision. Participants were informed that the three participants with the highest scores in the experiment would receive a £20 Amazon voucher as a reward; at the end of the experiment, the three highest scorers received their vouchers by e-mail. &#13;
   The experiment passed ethical review by the Department of Psychology at Lancaster University. All participants were shown a participant information sheet and signed a consent form prior to the start of the experiment. The Participant Debrief Sheet was presented to participants at the end of the experiment. &#13;
Materials &#13;
   The materials were created and presented with the Psychophysics Toolbox Version 3 (Brainard, 1997) for MATLAB (MathWorks, Sherborn, MA). The stimuli were displayed on an MS-Windows machine on a screen with a 1920 × 1080 pixel resolution and a 60 Hz refresh rate. &#13;
   Each display consisted of 11 L-shaped items and one T-shaped item, all black and 1.25° × 1.25°, presented on a white background. The single T-shaped item in each display was the target, rotated 90° clockwise (called left) or counterclockwise (called right); across the experiment, the target was rotated to the left and to the right equally often. The L-shaped distractors were randomly rotated by 0°, 90°, 180° or 270°. To increase the difficulty of the task (Jiang &amp; Chun, 2001), the L-shaped items had a 4-pixel offset at the junction of their lines, making them more similar to the T-shaped target. In each display, the items were balanced across the quadrants of the display, and this randomisation was carried out for each subject individually. &#13;
&#13;
Experimental design &#13;
   This experiment was conducted in a quiet testing room, with each subject completing the experiment alone. The experiment consisted of 20 training blocks of 16 trials each. Each trial began with a 0.5-second fixation cross, followed by a search display that remained on screen until the subject's manual response or until the maximum response time limit of 6 seconds was reached. Participants were asked to respond as quickly and accurately as possible, reporting the direction of the target ("T" stem pointing left or right) by pressing C or N, respectively, on a standard keyboard. Every 5 training blocks formed one epoch, for a total of four epochs, with a fixed 30-second rest period between epochs. The whole experiment lasted about 40 minutes. &#13;
   Participants were given a score (points) after each trial based on their reaction time (for correct responses within 2 seconds), i.e., the 'reward' for the experiment. Each subject was informed before the experiment that they would have a final score at the end and that the top three scorers would receive a £20 voucher. The experiment used two reward conditions, high (score × 10) and low (score × 1). In the high reward condition, a correct answer was scored as (2000 - reaction time) × 10; in the low reward condition, as (2000 - reaction time) × 1. &#13;
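   A minimal sketch of this scoring rule (assuming reaction time is measured in milliseconds; the function name is illustrative, not from the original scripts): &#13;
&#13;
    def trial_score(rt_ms: float, high_reward: bool) -> float:&#13;
        """Points for a correct response given within 2 seconds."""&#13;
        multiplier = 10 if high_reward else 1&#13;
        return (2000 - rt_ms) * multiplier&#13;
&#13;
    # e.g. a correct 650 ms response on a high-reward trial:&#13;
    print(trial_score(650, high_reward=True))   # (2000 - 650) * 10 = 13500&#13;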
   For each subject, eight positions on an imaginary ring were randomly selected as target positions, with an equal number of target positions in each quadrant. In each block, each target location was presented once in a repeated display and once in a new display in the same reward condition (twice in total). In the repeated display, the positions and orientations of the distractors remained constant, along with the target position, while in the new display both were changed randomly. In both the new and repeated displays, the target orientation was changed randomly, so that no link could be made between the repeated configurations, target locations or reward values and specific responses. &#13;
&#13;
   The eight target positions were divided into two categories: (1) four target positions were always combined with a high reward (score × 10) in both repeated and new displays; (2) the other four target positions were always combined with a low reward (score × 1) in both repeated and new displays. The configurations in the repeated trials were therefore also only ever paired with high or low rewards. &#13;
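   For illustration, a sketch of this per-subject counterbalancing (the ring positions and the seed are hypothetical; the original randomisation was implemented in MATLAB): &#13;
&#13;
    import random&#13;
&#13;
    def assign_target_rewards(quadrant_positions, rng):&#13;
        """Select 2 target positions per quadrant (8 in total), then pair&#13;
        4 of them with high reward and 4 with low reward."""&#13;
        targets = []&#13;
        for quadrant in quadrant_positions:&#13;
            targets += rng.sample(quadrant, 2)  # equal number per quadrant&#13;
        rng.shuffle(targets)&#13;
        return dict(zip(targets, ["high"] * 4 + ["low"] * 4))&#13;
&#13;
    rng = random.Random(42)  # one seed per subject&#13;
    ring = [[(q, k) for k in range(6)] for q in range(4)]  # hypothetical ring positions by quadrant&#13;
    print(assign_target_rewards(ring, rng))&#13;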
   A mixed experimental design was used in this study, with the within-subjects factor being the feedback received after the subjects' responses. During the feedback phase of each trial, the score obtained for that trial was displayed on screen if a correct response was given within a time window of 2 seconds from the onset of the display; the screen also displayed whether the trial was a "10x bonus" trial or a "normal trial". For trials with a correct response time of more than 2 seconds, no score was awarded, and the feedback "too slow, 0 points" was displayed in the centre of the screen. For trials with a reaction time of more than 6 seconds, 10,000 points were deducted, and the feedback was "Time out! Too slow, -10,000 points". For incorrect responses, 10,000 points were deducted, and the feedback was "Error! -10,000 points". The total number of points accumulated so far was displayed below the feedback, 1 second after the feedback was presented. &#13;
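   The feedback contingencies can be summarised in a short sketch (reaction times in milliseconds; the message strings follow the description above, and trial_score() is the illustrative function from the earlier sketch): &#13;
&#13;
    def feedback(correct: bool, rt_ms: float, high_reward: bool):&#13;
        """Return (points, message) for one trial, per the rules above.&#13;
        Reuses trial_score() from the earlier scoring sketch."""&#13;
        if rt_ms > 6000:              # no response within the 6 s limit&#13;
            return -10_000, "Time out! Too slow, -10,000 points"&#13;
        if not correct:&#13;
            return -10_000, "Error! -10,000 points"&#13;
        if rt_ms > 2000:              # correct, but outside the 2 s scoring window&#13;
            return 0, "too slow, 0 points"&#13;
        label = "10x BONUS trial!" if high_reward else "Normal trial"&#13;
        return trial_score(rt_ms, high_reward), label&#13;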
   This experiment also had a between-subjects factor: subjects were randomly divided into two groups. Odd-numbered participants formed the "instructed group" and saw a prompt in the centre of the screen before the start of each trial, informing them whether the trial was a high or low reward trial: for the high reward condition, "10x BONUS trial!" was displayed in green; for the low reward condition, "Normal trial" was displayed in white. Even-numbered participants formed the "not instructed group"; they saw no prompt before each trial and only learned during the feedback phase whether their score had received the 10x reward.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3281">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3282">
                <text>Excel.csv&#13;
r_file.R&#13;
jasp_file.jasp&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3283">
                <text>Fan2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3284">
                <text>Jessica Andrew&#13;
Jack Ho</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3285">
                <text>Open </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3286">
                <text>none</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3287">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3288">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3289">
                <text>LA1 4YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3290">
                <text>Tom Beesley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3291">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3292">
                <text>Cognitive, Development</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3293">
                <text>52 Lancaster University students&#13;
male = 20, female = 32</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3294">
                <text>ANOVA, Bayesian Analysis, T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
