<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?collection=6&amp;output=omeka-xml&amp;page=1" accessDate="2026-05-01T20:17:17+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>24</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="200" public="1" featured="0">
    <fileContainer>
      <file fileId="227">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e062f8b5eaffecab9990636ba589a6b1.pdf</src>
        <authentication>f34904e516c4c04821ec1e52402b3ea9</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3988">
                <text>Cerebral Lateralisation for Emotion Processing of Chimeric Faces in Individuals with Autism Spectrum Disorder </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3989">
                <text>Alexandra Crossley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3990">
                <text>5th September 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3991">
                <text>Many studies have suggested that emotion processing tasks, such as facial emotion recognition, are typically lateralised to the right hemisphere, with different emotions eliciting differing strengths of lateralisation (Bourne, 2010). However, there has been much debate as to the lateralisation of individuals with autism spectrum disorder (ASD) (Ashwin et al., 2005; Shamay-Tsoory et al., 2010). This study assessed the cerebral lateralisation of 30 adults with ASD, five children with ASD, 435 neurotypical adults and ten neurotypical children in a chimeric faces task, and aimed to identify whether the atypical lateralisation seen in children with ASD persists into adulthood (Taylor et al., 2012). Furthermore, the study aimed to identify whether lateralisation strength is affected by the emotion of the facial stimuli. No emotion- or age-related change in lateralisation was found; however, participants with ASD demonstrated a weaker right-hemispheric lateralisation compared to neurotypical participants. Therefore, this study supported the concept that individuals with ASD show atypical lateralisation which persists into adulthood; however, no evidence was found to support the concept that different emotions elicit different strengths of lateralisation.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3992">
                <text>autism spectrum disorder, cerebral lateralisation, emotion processing, adults, children, chimeric faces task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3993">
                <text>Method&#13;
Participants&#13;
Data from a total of 481 participants with native-level English proficiency (or age-expected language development in children), normal or corrected-to-normal vision and no history of neurological disease or hearing loss were analysed for the current study (Table 1). Participants in the group ‘adults with ASD’ (N = 30; age: M = 30.17, SD = 9.85) were recruited through adverts on social media, through Prolific Academic (www.prolific.co), and through word of mouth. Participants in the groups ‘children with ASD’ (N = 5; age: M = 6.8, SD = 1.48) and ‘neurotypical children’ (N = 11; age: M = 7.0, SD = 1.90) were recruited through primary schools and word of mouth (Brooks, 2023), and parents of potential child participants were required to email a researcher to express their interest in participation. Participants in the group ‘neurotypical adults’ (N = 435; age: M = 29.44, SD = 8.03) were recruited through Prolific Academic (www.prolific.co) as part of a larger online behavioural laterality battery (Parker et al., 2021). Of the 481 participants who took part in the study, 32 were excluded during the data cleaning process (see Table 1 and Data Analysis for further information).&#13;
&#13;
Measures&#13;
As part of the study, a series of questionnaires was administered to collect information about the participants to ensure that individual differences could be accounted for. Participants were asked to complete these questionnaires and tasks prior to beginning the main chimeric faces task, and were requested to use a desktop or laptop computer for the entirety of the study. For the ‘neurotypical children’ and ‘children with ASD’ groups, parents were asked to complete the questionnaires on behalf of the children and were asked to be present for the tasks, which were completed during a Microsoft Teams call with a researcher.&#13;
The study was completed online using the Gorilla Experiment Builder (www.gorilla.sc), a cloud-based tool for collecting data in the behavioural sciences.&#13;
&#13;
Demographic Questionnaire&#13;
The demographic questionnaire asked participants their age, gender, length of time in education (in years), language status, two questions assessing handedness (“Which is your dominant hand? / Which hand do you prefer to use for tasks such as writing, cutting, and catching a ball?”) and footedness (“Which foot do you normally use to step up on a ladder/step?”), and two eye dominance tests (Miles, 1929; Porac &amp; Coren, 1976). Participants were also asked whether they had a diagnosis of any developmental disorders, including ASD, dyslexia, attention deficit hyperactivity disorder or a language disorder (such as 'developmental language disorder' or 'specific language impairment'). For each diagnosis, participants had the option to answer “Yes”, “No”, or “Prefer not to say”, with the exception of ASD which also had the option to answer “No but I am self-diagnosed”. At this point, participants were sorted into their groups based on age (‘children’: five- to 11-years-old; or ‘adults’: 18- to 50-years-old) and ASD diagnosis (‘with ASD’, or ‘neurotypical’). Adults with a self-diagnosis of ASD were included in the ‘adults with ASD’ group.&#13;
&#13;
Edinburgh Handedness Inventory&#13;
The Edinburgh Handedness Inventory (EHI; Oldfield, 1971) was administered to provide a scaled score of handedness. Adult participants were asked to score ten daily tasks on a five-point Likert scale based on which hand they preferred to use during each task (“Left hand strongly preferred” = 2, “Left hand preferred” = 1, “No preference” = 0, “Right hand preferred” = 1, or “Right hand strongly preferred” = 2). These tasks included daily activities such as writing, brushing teeth, and opening a box. The EHI was scored by combining the direction and exclusiveness of the hand preference. Two totals were created: one of right-hand preference and one of left-hand preference. The difference was then found by subtracting the left-hand total from the right-hand total. This was then divided by the total score of both hand preference scores and multiplied by 100 (i.e., 100 x (right-hand total – left-hand total) / (right-hand total + left-hand total)). Final EHI scores ranged from -100 to +100, with positive scores indicating right-handedness, and negative scores indicating left-handedness. Child participants were not required to complete the EHI questionnaire.&#13;
&#13;
Lexical Test for Advanced Learners of English&#13;
A version of the Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer &amp; Broersma, 2012) was provided to assess the participants’ level of proficiency in English. Within this, adult participants were shown 60 written stimuli comprising English words and pseudowords (words that follow the orthographical and phonetic rules of the English language and are pronounceable but are otherwise nonsense words, e.g. ‘proom’) and asked to assess whether each word was an existing English word or not. Test scores were calculated by averaging the percentages of correct answers for English words and pseudowords, with final scores ranging from 0 to 100. Child participants were not required to complete the LexTALE task.&#13;
&#13;
Autism-Spectrum Quotient (Short Version)&#13;
An abridged version of the Autism-Spectrum Quotient (AQ-Short; Hoekstra et al., 2011) was used to provide a measure of ASD traits. Participants with ASD were asked to rate 28 statements on a four-point Likert scale based on their level of agreement, with each answer accruing a different number of points (“Definitely agree” = 1, “Slightly agree” = 2, “Slightly disagree” = 3, or “Definitely disagree” = 4). On items in which “Definitely agree” represented a characteristic of ASD, the scoring was reversed. The scores for each question were totalled, with potential scores ranging from 28 (no ASD traits) to 112 (endorsement of all ASD traits). Scores above 65 indicated ASD traits to a diagnosable degree. Neurotypical participants were not required to complete the AQ-Short questionnaire.&#13;
&#13;
Procedure&#13;
Lateralisation for Facial Emotion Processing Task&#13;
A chimeric faces task was used to assess lateralisation for facial emotion processing.&#13;
Stimuli. The chimeric faces stimuli were created by Dr Michael Burt (Burt &amp; Perrett, 1997) and provided by Parker et al. (2021).&#13;
A set of 16 facial stimuli was created by merging the halves of two photographs of a man’s face, each depicting one of four emotions (‘happiness’, ‘sadness’, ‘anger’, or ‘disgust’), vertically down the centre of the face and blending them at the midline (see Figure 1 for an example). Each emotion was paired either with itself, so that both hemifaces of the stimulus matched in emotion (a ‘same face’), or with a differing emotion, so that the two hemifaces differed (a ‘chimeric face’). Of the 16 stimuli, 12 were ‘chimeric faces’ and four were ‘same faces’.&#13;
Task. Each trial began with a fixation cross shown for 1000ms, followed by the face stimuli for 400ms. Participants then recorded which emotion they saw most strongly by clicking the corresponding button from a choice of the four emotions (Figure 2). For the children, emoticons were used instead of written words (Oleszkiewicz et al., 2017) (Figure 3). A response triggered the beginning of the next trial, with a time-out duration set at 10400ms after which the next trial was triggered automatically. Response choice and response times were recorded.&#13;
The task was split into four blocks of trials with a break between each block. Stimuli were presented in a random order and shown twice in each block, resulting in the participants being shown 32 stimuli per block and a total of 128 within the whole task.&#13;
&#13;
&#13;
Participants were familiarised with the stimuli at the start of the task, with the ‘same face’ stimuli being shown alongside a label explaining which emotion was being presented, to ensure they could recognise the emotions. A practice block was given at the start of the task to ensure participants knew how to complete the task, using the emotions ‘surprise’ and ‘fear’.&#13;
&#13;
Additional Measures&#13;
As data collection also included tasks for other studies, participants were also asked to complete a version of the Empathy Quotient – short (Wakabayashi et al., 2006), and undertake a dichotic listening task and its associated device checks (Parker et al., 2021). As these items were not part of the main study, participants were asked to complete these following the completion of the main study and its associated questionnaires and tasks, to ensure any findings from the study were not due to the additional measures.&#13;
&#13;
Laterality Index&#13;
A laterality index (LI) for each participant was calculated using the same method as Parker et al. (2021) by finding the difference between the number of times the participant chose the right-hemiface emotion and the left-hemiface emotion. This was then divided by the total number of times they chose either the right- or left-hemiface emotion, and multiplied by 100 (i.e., 100 x (right hemiface – left hemiface) / (right hemiface + left hemiface)). Scores ranged between -100 and +100, with a negative LI indicating a left-hemiface bias, and thus, a right-hemispheric dominance, and a positive LI showing the opposite.&#13;
&#13;
Data Analysis&#13;
Participants who scored less than 80 on the LexTALE task were removed, as it was deemed their understanding of the English language was not strong enough and might cause issues with understanding the instructions (Parker et al., 2021). Furthermore, all trials with a response time faster than 200ms were removed, as it was suggested that responses at this speed were too quick to have been based on the processing of the stimuli (Parker et al., 2021). In addition, outlier response times for each participant were removed using Hoaglin and Iglewicz's (1987) procedure. Within this, outliers were any response times more than 1.65 times the interquartile range below the first quartile or above the third (i.e., below Q1 – (1.65 x (Q3-Q1)) or above Q3 + (1.65 x (Q3-Q1))). Following the removal of all outlying trials, any participant with less than 80% of trials remaining was removed. In addition, participants who scored less than 75% on ‘same face’ trials (trials in which both hemifaces depicted the same emotion) were noted, because emotion processing is an area of difficulty for individuals with ASD. Within this, three participants in the ‘children with ASD’ group (60%), three participants in the ‘neurotypical children’ group (27.27%), four participants in the ‘adults with ASD’ group (13.33%), and 30 participants in the ‘neurotypical adults’ group (7.41%) scored less than 75% on ‘same face’ trials, suggesting they had difficulties identifying the emotions.&#13;
To address the hypotheses, a linear model was performed using LI as the outcome and group (‘ASD’ or ‘neurotypical’), age (‘adult’ or ‘child’) and emotion (‘happy’ and ‘angry’, or ‘sad’ and ‘disgust’) as the predictors, including interactions between each predictor (Group x Age; Group x Emotion; Age x Emotion; and a three-way interaction, Group x Age x Emotion).</text>
              </elementText>
            </elementTextContainer>
          </element>
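          <!-- A minimal, hedged Python sketch (not taken from the study materials; all
               function and variable names are illustrative assumptions) of the scoring
               and trial-exclusion formulas described in the Source element above.

               def ehi_score(right_total, left_total):
                   # Edinburgh Handedness Inventory: 100 x (R - L) / (R + L), range -100 to +100
                   return 100 * (right_total - left_total) / (right_total + left_total)

               def laterality_index(right_hemiface, left_hemiface):
                   # LI as described (Parker et al., 2021): negative values indicate a
                   # left-hemiface bias, i.e. right-hemispheric dominance
                   return 100 * (right_hemiface - left_hemiface) / (right_hemiface + left_hemiface)

               def rt_outlier_bounds(q1, q3):
                   # Hoaglin and Iglewicz (1987): keep response times between
                   # Q1 - 1.65 * (Q3 - Q1) and Q3 + 1.65 * (Q3 - Q1)
                   iqr = q3 - q1
                   return q1 - 1.65 * iqr, q3 + 1.65 * iqr

               # The reported linear model (LI as outcome; group, age and emotion as
               # predictors with all interactions) could, for example, be fitted with
               # statsmodels' formula interface:
               #   import statsmodels.formula.api as smf
               #   smf.ols("LI ~ group * age * emotion", data=df).fit()
          -->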
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3994">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3995">
                <text>.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3996">
                <text>Crossley2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3997">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3998">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3999">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4000">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="4001">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4027">
                <text>Mshary Al Jaber</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="4002">
                <text>Margriet Groen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="4003">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="4004">
                <text>Developmental, Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="4005">
                <text>481 participants with native-level English proficiency: 164 male, 240 female, and 1 other.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="4006">
                <text>Linear Mixed Effects Modelling and T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="173" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3503">
                <text>Does implicit mentalising involve the representation of others’ mental state content?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3504">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3505">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3506">
                <text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame-of-reference, which creates a spatial overlap between stimulus and response locations in the joint (but not individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study attempted to investigate the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which typical geometric stimuli were replaced with a unique set of animal silhouettes. Half of the set was surreptitiously assigned to the participant themselves and the other half to their partner. Critically, to examine the content of co-representation, participants were presented with a surprise image recognition task afterwards. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results are discussed, and possible methodological remedies for future studies are suggested.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3507">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3508">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to remotely run the stimuli selection pre-test. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes to use in the main experiment. Recognisability was an important consideration because participants would only briefly glimpse at the animals; therefore, the ability to recognise the silhouettes quickly and subconsciously was paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and sequentially presented. Each image was displayed for 1000ms to match the duration of stimuli exposure in the final experimental design. &#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating corresponding to each naming attempt (again, on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability scores for each animal were summed, averaged, and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of the same species. Because the 32nd place was tied between two animals which achieved the same recognisability scores, the animal with the highest name-guessing confidence rating was selected.&#13;
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously participated in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females), 51 of whom were students, staff, or members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate the strength of previously found effects) and the additional memory/recognition task, a conservative-leaning effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a small-to-medium repeated-measures, within-between interaction was approximately 52.&#13;
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 @ 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.&#13;
The 32 animals chosen via the pre-test for use in the main experiment (Simon/Recognition task) were recoloured to be entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task &#13;
&#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to sit at a third computer, where they sat approximately 60 cm (diagonally, approximately 45° from the centre of the screen) away from the computer either on the left or right side, with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm beside their partner. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton when an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which it appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task which came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. The 16 animals were further divided in half and matched to each of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining unchosen 16 animals were used as foils in the Recognition task. Participant sitting location (left/right), stimuli colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the Recognition task) were counterbalanced between participants. Additionally, stimuli presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
After reading brief instructions, participants completed a practice section. When participants achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks, where each block contained 16 trials (which corresponded to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., 8) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of (in)compatible trials for each participant (i.e., four of each compatible/incompatible trials per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible trials.&#13;
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks: in some studies, each trial (and thus stimulus presentation) immediately terminated upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was only displayed for a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), after which was a response window during which the stimulus was not displayed at all. The design choice of fixing the stimulus presentation duration to 1000 ms irrespective of participant response was to ensure that each animal colour/species was displayed for an equal duration of time. This was important so as not to bias the incidental memory of participants towards trials wherein one participant was slower to respond (and would therefore have kept the stimulus on screen for longer, disproportionately encouraging encoding).&#13;
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal was previously assigned to, and presented in the Simon task as, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in individual condition’s case, this simply refers to the not-self-assigned colour, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to the individual computers which they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one by one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals appeared as during the previous (Simon) task; so long as they remembered having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Thirty-two animal silhouettes were presented, of which 16 had been seen in the Simon task, while the remaining 16 yet-to-be-seen animal images were added as foils in this recognition task. The participants’ responses to the two aforementioned questions were recorded as key response variables.&#13;
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked whether they had any suspicions about what the study was testing, or whether they had paid specific attention to, and/or purposely memorised, the animal species shown in the Simon task (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which could undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were also asked to individually rate their feelings of interpersonal closeness with their task partner using two questions. The first was a text-based question which asked how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging from “I have never seen him/her before: s/he is a stranger to me.” to “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The next question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consisted of pictographic representations of the degree of interpersonal relationship. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each of which consisted of two Venn-diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following the approach of Shafaei et al. (2020), the text-based question was included as a confirmatory measure for the IOS scale, the latter being the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
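          <!-- A minimal, hedged sketch (assumed names, not taken from the study's PsychoPy
               materials) of how each Simon-task trial could be coded as spatially compatible
               or incompatible, per the Design and Procedure text above.

               def code_compatibility(stimulus_side: str, response_side: str) -> str:
                   # stimulus_side: side of the screen the coloured animal appeared on
                   # response_side: side on which the responding participant (and pushbutton) sat
                   return "compatible" if stimulus_side == response_side else "incompatible"
          -->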
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3509">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3510">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3511">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3512">
                <text>Elisha Moreton&#13;
Aubrey Covill</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3513">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3514">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3515">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3516">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3517">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="161" public="1" featured="0">
    <fileContainer>
      <file fileId="158">
        <src>https://www.johnntowse.com/LUSTRE/files/original/aaba3f802433d1a1ec1b363658d8b321.docx</src>
        <authentication>23a0c8cc680512f1bf66290ce3a72da3</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3275">
                <text>Exploring the impact of rewards on contextual cueing effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3276">
                <text>Wen Fan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3277">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3278">
                <text>        There is a huge amount of complex information about visual stimuli in the environment, and the individual's visual processing system has a limited capacity to process this information, so selective attentional mechanisms prioritise the most valuable information. Fixed contextual cues in the environment help us to allocate attentional resources efficiently. In their study of context, Chun and Jiang proposed the contextual cueing effect (CC effect). This effect is likely to be a form of implicit learning resulting from selective attention. Specifically, subjects searched for the target faster in the repeated configuration than in the random configuration, as fixed contextual cues would help locate the target. It was found that this effect could be moderated by manipulating external motivation, i.e., reward. However, there is so far considerable debate as to whether high rewards can contribute to the CC effect, and whether rewards act on the CC effect or on the positional probability learning effect. The present experiment used a classical contextual cueing task and a mixed between-/within-group experimental design to explore the effect of reward on the contextual cueing effect. &#13;
        The experimental results showed that high rewards did not contribute more significantly to the CC effect than low rewards, but high rewards did facilitate the target probability learning effect. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3279">
                <text>contextual cueing effect, reward, selective attention </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3280">
                <text>Participants &#13;
   Fifty-two Lancaster University students (20 identified as male and 32 as female; age M=23.9, SD=2.55 years, range: 19-33 years) participated in the experiment. Two participants were excluded from the final analysis (see below for details). &#13;
   All participants had normal or corrected-to-normal vision. Participants were informed that the three participants with the highest scores in the experiment would receive a £20 Amazon voucher as a reward. At the end of the experiment, the three participants with the highest scores received their £20 Amazon vouchers by e-mail. &#13;
   The experiment passed ethical review by the Department of Psychology at Lancaster University. All participants were shown a participant information sheet and signed a consent form to participate in this study prior to the start of the experiment. The Participant Debrief Sheet was presented to participants at the end of the experiment. &#13;
Materials &#13;
   The materials were created and presented with the Psychophysics Toolbox Version 3 (Brainard, 1997) for MATLAB (MathWorks, Sherborn, MA). The stimuli were displayed on an MS-Windows machine on a screen with 1920 × 1080 pixels resolution and a 60 Hz refresh rate. &#13;
   Each display consisted of 11 L-shaped items and one T-shaped item, all black and 1.25° x 1.25° in size, presented on a white background. The only T-shaped item in each display was the target, which was rotated 90° clockwise (called left) or counterclockwise (called right). Across the experiment, the target was rotated to the left and to the right an equal number of times. The L-shaped distractors were randomly rotated by 0°, 90°, 180° or 270°. To increase the difficulty of the task (Jiang &amp; Chun, 2001), the L-shaped items had a 4-pixel offset at the junction of the lines to make them similar to the T-shaped targets. In each display, the items were balanced across the quadrants of the display. This randomisation was carried out for each subject individually. &#13;
&#13;
Experiment design &#13;
   This experiment was conducted in a quiet testing room, with each subject alone in the room to complete the experiment. The experiment consisted of 20 training blocks. Each block consisted of 16 trials. Each trial began with a 0.5 second fixation cross, followed by a search display that remained until the subject's manual response or until the maximum response time limit of 6 seconds was reached. Participants were asked to respond as quickly and accurately as possible, reporting the direction of the target (the stem of the "T" pointing left or right) by pressing C or N on a standard keyboard, respectively. Every five training blocks formed one epoch, for a total of four epochs, with subjects having a fixed 30 second rest period between epochs. The whole experiment lasted about 40 minutes. &#13;
   Participants were given a score (points) after each trial based on their reaction time (correct response within 2 seconds), i.e. the 'reward' for the experiment. Each subject was informed before the experiment that they would have a final score at the end of the experiment and that the top three participants with the highest scores would receive a £20 voucher. The experiment used two reward conditions, a high (score*10) and a low (score*1) reward. In the high reward condition, the correct answer was scored as (2000 - reaction time) *10. In the low reward condition, the correct answer was scored as (2000 - reaction time) *1. &#13;
   For each subject, eight positions in the imaginary ring were randomly selected as target positions. Each quadrant had an equal number of target positions. In each block, each target location was presented once in a repeated display and once in a new display in the same reward condition (twice in total). In the repeated display, the position and orientation of the distractor remained constant along with the target position, while in the new display both were changed randomly. In both the new and repeated displays, the target orientation was changed randomly so that no link could be made between the repeated configurations, target locations or reward values and specific responses. &#13;
&#13;
   The eight target positions were divided into two categories: (1) four target positions were always combined with a high reward (score*10) in both repeated and new displays; (2) the other four target positions were always combined with a low reward (score*1) in both repeated and new displays. Therefore, each repeated configuration was only ever paired with either the high or the low reward. &#13;
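   A minimal R sketch of how one block's 16 trials could be assembled under this design is given below; the variable names and labels are illustrative assumptions, not the original experiment script. &#13;
target_location = 1:8                                 # assumed labels for the eight target positions&#13;
reward = rep(c("high", "low"), each = 4)              # fixed pairing of locations with reward&#13;
block = expand.grid(target_location = target_location, display = c("repeated", "new"))&#13;
block$reward = reward[block$target_location]&#13;
block$target_orientation = sample(rep(c("left", "right"), 8))   # random, not tied to reward&#13;
block = block[sample(nrow(block)), ]                  # shuffle the 16 trials within the block&#13;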
   A mixed experimental design was used in this study, with the within-subjects factor being the feedback received after the subjects' responses. During the feedback phase of each trial, the score obtained on that trial was displayed on the screen if a correct response was made within 2 seconds of the onset of the display. The screen also indicated whether the trial was a "10x bonus" trial or a "normal trial". For trials with a correct response slower than 2 seconds, no score was awarded and the feedback "too slow, 0 points" was displayed in the centre of the screen. For trials with a reaction time of more than 6 seconds, 10,000 points were deducted and the feedback read "Time out! Too slow, -10,000 points". For incorrect responses, 10,000 points were deducted and the feedback read "Error! -10,000 points". The total number of points accumulated so far was displayed below the feedback 1 second after the feedback was presented. &#13;
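   As an illustration of these scoring and feedback rules, a minimal R sketch is given below; the function and argument names are assumptions rather than the original code. &#13;
score_trial = function(rt_ms, correct, high_reward) {&#13;
  multiplier = if (high_reward) 10 else 1&#13;
  if (rt_ms > 6000) return(list(points = -10000, feedback = "Time out! Too slow, -10,000 points"))&#13;
  if (!correct)     return(list(points = -10000, feedback = "Error! -10,000 points"))&#13;
  if (rt_ms > 2000) return(list(points = 0, feedback = "too slow, 0 points"))&#13;
  points = (2000 - rt_ms) * multiplier&#13;
  list(points = points, feedback = paste(points, "points"))&#13;
}&#13;
score_trial(rt_ms = 850, correct = TRUE, high_reward = TRUE)   # (2000 - 850) * 10 = 11500 points&#13;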
   The experiment also included a between-subjects factor: subjects were randomly divided into two groups. Odd-numbered participants formed the "instructed" group and saw a prompt in the centre of the screen before the start of each trial, informing them whether the trial was a high or low reward trial. For the high reward condition, "10x BONUS trial!" was displayed in the centre of the screen in green; for the low reward condition, "Normal trial" was displayed in the centre of the screen in white. Even-numbered participants formed the "not-instructed" group; they saw no prompt before each trial and only learned during the feedback phase whether they had received 10x the reward for their score.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3281">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3282">
                <text>Excel.csv&#13;
r_file.R&#13;
jasp_file.jasp&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3283">
                <text>Fan2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3284">
                <text>Jessica Andrew&#13;
Jack Ho</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3285">
                <text>Open </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3286">
                <text>none</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3287">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3288">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3289">
                <text>LA14YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3290">
                <text>Tom Beesley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3291">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3292">
                <text>Cognitive, Development</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3293">
                <text>52 Lancaster University students&#13;
male = 20, female = 32</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3294">
                <text>ANOVA, Bayesian Analysis, T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="153" public="1" featured="0">
    <fileContainer>
      <file fileId="169">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c32bb813b138e5706ec76bb2e9c3a7b3.doc</src>
        <authentication>f4062334d78cf5f0c54a8646bfb0feb2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3150">
                <text>Grasping Ability in Virtual Reality: Effects of Eating Disorders on Perceptions of Action Capabilities</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3151">
                <text>Siri Sudhakar</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3152">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3153">
                <text>Knowledge of one’s body size is vital to be able to accurately judge an object’s size. For example, knowing the length of your arm is crucial to estimating the maximum distance reachable. Accurate perception of action capabilities is the result of a healthy mental body representation at a conscious and implicit level. This ability to use one’s mental body representation in action perception is assumed to be distorted in individuals with eating disorders (ED). However, unlike prior research, this study will be investigating both the effect of body image and schema distortion on action capabilities. Thus, this study will assess whether the ability to update one’s perception of their action capabilities in response to morphological changes is altered in individuals with EDs. The experiment had participants (N = 20) embody small (50% of hand size), normal, and large (150% of hand size) avatar hands (in virtual reality) and then estimate the maximum size of a box graspable. The size of the box, beginning as either large or small across all three conditions, was manipulated to observe haptic perception in participants. We found that individuals with ED showed similar estimates despite embodying different hand sizes alluding to their inability to successfully update their haptic perceptions. Low interoceptive awareness and body image disturbances were the root cause of this perceptional flaw in eating-disordered individuals. Treatment focused on improving the altered IA and implicit distortions in body schema could improve haptic perception in ED individuals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3154">
                <text>Action Capability, Eating Disorder, Interoceptive Awareness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3155">
                <text>A priori power analysis was conducted through the G*Power software (Faul et al., 2007) to determine the sample size required to achieve adequate power (N = 30). The required power (1- β) was set at .80 and the significance level (α) was set to .05. Based on Readman et al. (2021), who used the same methodology as this study, we anticipated a large&#13;
effect size of 0.9. This was deduced as this study obtained a ηp2 of .49 with a sample of N =30. For the frequentist parameters defined, a sample size of N = 3 is required to achieve a power of .80 at an alpha of .05.&#13;
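As a rough illustration, the calculation can be approximated in R with the pwr package, treating the design as a two-group one-way comparison; because G*Power's repeated-measures procedure was actually used, this simplification will not reproduce the reported sample size exactly.&#13;
library(pwr)&#13;
# k = 2 groups, Cohen's f = 0.9, alpha = .05, target power = .80&#13;
pwr.anova.test(k = 2, f = 0.9, sig.level = 0.05, power = 0.80)&#13;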
EDs are also notoriously variable. Given that previous studies using similar methodologies have typically recruited between 20-30 participants (Readman et al., 2020; Lin et al., 2020), we elected to recruit 30 participants (15 per condition). However, this study was only able to recruit 23 participants in total.&#13;
22 participants from Lancaster and Lancaster University (seven males, 15 females), aged between 18 and 30 (Mage = 21.73, SDage = 1.98), participated in this study. Two participants were removed for being extreme outliers, resulting in the present dataset (N = 20; Mage = 21.65, SDage = 2.06).&#13;
Amongst the participants of this study, seven disclosed a diagnosis of an ED. In accordance with the revised Edinburgh Handedness Inventory (R-EHI) classification system (Milenkovic &amp; Dragovic, 2013), the majority of participants (N = 19) were right-handed, with only one participant being left-handed. Borderline to high levels of anxiety, as measured by the Hospital Anxiety and Depression Scale (HADS; Stern, 2014), were observed in 16 participants, while seven participants showed similar levels of depression.&#13;
Eating Disorder Inventory (EDI): Participants with an ED were also asked to complete the EDI, a self-report questionnaire that assesses the presence and level (depending on the estimate) of anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED) (Augestad and Flanders, 2002). It consists of 64 items, with eight subscales measuring dimensions such as drive for thinness, body dissatisfaction, perfectionism, interpersonal distrust, and interoceptive awareness (IA) (Garner, Olmstead, &amp; Polivy, 1983; Vinai et al., 2016; Santangelo et al., 2022). Seven participants had an ED while the remaining participants formed the healthy control group.&#13;
Design&#13;
This study used a 2 (between-subjects factor: Group – control vs. ED) x 3 (within-subjects factor: Hand size – small vs. normal vs. large) mixed factorial design. The dependent variable (DV) is the grasping-ability estimate, and the independent variables are group and hand size. All participants in each group experienced all hand-size conditions. The order in which the conditions were completed was randomised across participants using a Latin square, as sketched below. Such counterbalancing controls for confounding/extraneous variables and diminishes order and sequence effects, improving internal validity (Corriero, 2017).&#13;
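A minimal R sketch of this Latin-square counterbalancing is given below; the allocation is illustrative rather than the original assignment procedure.&#13;
conditions = c("small", "normal", "large")&#13;
latin_square = rbind(conditions,&#13;
                     conditions[c(2, 3, 1)],&#13;
                     conditions[c(3, 1, 2)])           # each condition once per row and column&#13;
participants = 1:30                                    # illustrative number of participants&#13;
orders = latin_square[((participants - 1) %% 3) + 1, ] # cycle the rows across participants&#13;
head(orders)                                           # condition order for the first participants&#13;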
Stimuli and Apparatus&#13;
Participants were seated an arm’s length from the front of a standardised table. The Unity 3D© gaming engine with the Leap Motion plugin was used to create a 3D virtual environment in colour. Participants viewed this environment through an Oculus Rift CV1 head-mounted display (HMD). The HMD displayed the stereoscopic environment at 2,160 × 1,200 pixels (split across the two eyes) at 90 Hz (Binstock, 2015). Head and hand movements were tracked in real time by the HMD and the Leap Motion hand-tracking sensor attached to the HMD.&#13;
The HMD ensured that the participants’ perspective was updated in real time. Hand movements were updated via the virtual hand that was mapped onto the participant’s natural hands. Leap Motion for Unity provided assets such as avatar hands based on actual human hands. The virtual environment was visible to participants from a first-person perspective adjusted to their height. The VR display comprised a model room with a table located in the middle. Upon this table were either two white dots (calibration trials) or a white box (test trials).&#13;
 &#13;
 &#13;
Questionnaires&#13;
Revised Edinburgh Handedness Inventory (R-EHI). Participants’ handedness was assessed using the R-EHI. The modified version of the inventory was used as it addresses inconsistencies in, and improves the validity of, the original questionnaire (Milenkovic &amp; Dragovic, 2013). Handedness is estimated from participants’ preferences for either hand in activities such as writing, drawing, and throwing a ball.&#13;
Hospital Anxiety and Depression Scale (HADS). The HADS questionnaire was also given to all participants to assess the presence of borderline or abnormal levels of anxiety and depression. It is a quick questionnaire consisting of seven questions each for anxiety and depression, with the two subscales scored separately (Stern, 2014).&#13;
Procedure&#13;
Participation in this study took up to an hour of the participant’s time. It was conducted in the Whewell Building of Lancaster University. Participants were recruited through opportunity sampling and advertisements. All participants received £5 for their contribution to this study. All participants were native English speakers, had normal or corrected-to-normal vision, and had no motor difficulties. Participants provided informed consent through a consent form signed before the onset of the study. They were also provided with a debrief sheet and were verbally debriefed at the end of the experiment.&#13;
The methodology of this study mirrors that of Readman et al. (2021). The experiment was conducted in a virtual environment (VE) through a VR device. The inclusion of VR allows for controlled changes to grasping ability, with responses collected similar to how an individual would act in the real world (Normand et al., 2011). Moreover, the inclusion of VR enabled interactions with the morphologically altered virtual body in real-time, and in a similar physical environment through the immersive system built through the head-mounted displays (HMD) and motion sensors (Gan et al., 2021).&#13;
Participants completed the R-EHI, EDI, and HADS questionnaires before beginning the experiment. Participants were then asked to don the HMD and were introduced to the virtual environment through a brief demonstration. They were given approximately 5 minutes to explore the environment, to familiarise themselves with the immersive VR experience and to ensure no undue effects occurred. Participants completed three experimental conditions: normal hand size, constricted hand size (50% of their hand size), and extended hand size (150% of their hand size). Each condition consisted of calibration and test trials.&#13;
Calibration trials. Participants were presented with the virtual table, upon which two horizontally spaced dots were located. Using their dominant hand, participants were asked to touch the left-most dot with the left-most digit of that hand and then the right-most dot with its right-most digit. This occurred for 30 trials to ensure that the participant had habituated to the virtual hand.&#13;
Test trials. Participants were instructed to place their hands behind their backs, out of sight. The Leap Motion sensor was then temporarily paused to ensure that the virtual hands were not visible to the participants. Once in this position, participants were presented with a block in the VE that they had to imagine grasping with their dominant hand from above. The size of the block was manipulated, made either larger or smaller, with each alteration changing its size by 1 cm. The participant was asked to tell the researcher when the block reflected the maximum size that they would be able to grasp. The final size was saved before the participant was presented with another block.&#13;
Grasping was defined to participants as the ability to place their thumb on one edge of the block, extend their hand over the surface of the block, and place one of their fingers on the parallel edge. This grasp was also demonstrated to participants. Participants completed four test trials; in two, the block started small (0.03 cm) and was made larger, and in the remaining two the block started large (0.20 cm) and was made smaller. This was done to counteract the hysteresis effect, in which prior visual stimuli influence later perception (Poltoratski &amp; Tong, 2014). Therefore, four grasp-ability estimates were obtained for each experimental condition.&#13;
This study received ethical approval from Lancaster University Psychology department.&#13;
 &#13;
Data Analysis&#13;
An analysis of variance (ANOVA) is a statistical model used to examine differences in means (Rucci &amp; Tweney, 1980). The present dataset contains both a between-subjects factor (group) and a within-subjects factor (hand size), so a mixed ANOVA allows the means of the groups these factors cross-classify to be compared.&#13;
This is a two-way analysis, as there are two independent variables (group and hand size) but only one DV (grasping-ability estimate). ANOVA is appropriate for this dataset because the effects of both variables on the response estimate can be examined (Field, 2009). This study aims to establish the effect of group and hand size on grasping ability (GA). A mixed ANOVA therefore identifies any significant effect of either factor on the GA estimate and examines their interaction. The results of the mixed ANOVA help assess whether individuals with an ED update their estimates in response to changes in morphology, as sketched below.&#13;
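A minimal R sketch of this 2 (Group) x 3 (Hand size) mixed ANOVA is given below; the data frame, column names and placeholder values are assumptions rather than the dataset's actual variables.&#13;
set.seed(1)&#13;
ga = expand.grid(participant = 1:6, hand_size = c("small", "normal", "large"))&#13;
ga$group = ifelse(ga$participant %in% 1:3, "control", "ED")   # between-subjects factor&#13;
ga$estimate = rnorm(nrow(ga), mean = 10, sd = 2)              # placeholder grasp estimates (cm)&#13;
mixed_aov = aov(estimate ~ group * hand_size + Error(factor(participant) / hand_size), data = ga)&#13;
summary(mixed_aov)&#13;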
Data Preparation&#13;
The present dataset combined demographic, physical, and questionnaire-related (EDI, R-EHI, HADS) information with the GA estimates across the hand-size conditions (small vs. normal vs. large). The GA estimate for each condition was further sub-categorised according to whether the box started large or small, with four trials each. The averages of these four trials for the small starting box and the large starting box in each condition were taken, forming the mean grasp-ability estimates (cm).&#13;
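A minimal R sketch of this averaging step is given below; the trial-level data frame, its column names and the placeholder values are assumptions rather than the dataset's actual variables.&#13;
set.seed(1)&#13;
trials = expand.grid(participant = 1:6, hand_size = c("small", "normal", "large"),&#13;
                     start_size = c("small", "large"), trial = 1:2)&#13;
trials$estimate_cm = rnorm(nrow(trials), mean = 10, sd = 2)    # placeholder estimates&#13;
mean_ga = aggregate(estimate_cm ~ participant + hand_size + start_size, data = trials, FUN = mean)&#13;
# mean_ga holds one mean grasp-ability estimate (cm) per participant, hand-size condition and starting box size</text>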
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3156">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3157">
                <text>Data/excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3158">
                <text>SUDHAKAR2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3159">
                <text>Alexia Hockett &#13;
Romina Ghaleh Joujahri</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3160">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3161">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3162">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3163">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3164">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3255">
                <text>Dr. Megan Rose Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3256">
                <text>MSc </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3257">
                <text>Cognitive, Perception </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3258">
                <text>20</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3259">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="149" public="1" featured="0">
    <fileContainer>
      <file fileId="144">
        <src>https://www.johnntowse.com/LUSTRE/files/original/17e340bee54ebac611344515a86f9ff6.pdf</src>
        <authentication>4a222c6141db92dc7ee55aa00fb0d0ce</authentication>
      </file>
      <file fileId="145">
        <src>https://www.johnntowse.com/LUSTRE/files/original/896fd29b37e809eb53d43c14fa1b8eca.zip</src>
        <authentication>a0f3346a973237810f84764261f03f24</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3082">
                <text>Does implicit mentalising involve the representation of others’ mental state content? </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3083">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3084">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3085">
                <text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame-of-reference, which creates a spatial overlap between stimulus-response location in the joint (but not individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study attempted to investigate the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task where typical geometric stimuli were replaced with a unique set of animal silhouettes. Half of the set were each surreptitiously assigned to either the participant themselves or their partner. Critically, to examine the content of co-representation, participants were presented with a surprise image recognition task afterwards. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications on the contents of co-representation. Potential design-related reasons for these inconclusive results were discussed. Possible methodological remedies for future studies were suggested. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3086">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3087">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to run the stimulus-selection pre-test remotely. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes to use in the main experiment. Recognisability was an important consideration because participants would only briefly glimpse at the animals; therefore, the ability to recognise the silhouettes quickly and subconsciously was paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and sequentially presented. Each image was displayed for 1000ms to match the duration of stimuli exposure in the final experimental design. &#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating corresponding to each naming attempt (again, on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability scores for each animal were summed, averaged, and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of the same species. Because the 32nd place was tied between two animals which achieved the same recognisability scores, the animal with the highest name-guessing confidence rating was selected.&#13;
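A minimal R sketch of this selection rule is given below; the data frame, column names and placeholder ratings are assumptions rather than the original analysis code.&#13;
set.seed(1)&#13;
ratings = expand.grid(participant = 1:25,&#13;
                      animal = c("cat", "dog", "dog_2", "owl", "crab"))       # placeholder animals&#13;
ratings$species = sub("_.*$", "", ratings$animal)                             # e.g. dog_2 duplicates dog&#13;
ratings$recognisability = sample(1:7, nrow(ratings), replace = TRUE)&#13;
ratings$confidence = sample(1:7, nrow(ratings), replace = TRUE)&#13;
animal_means = aggregate(cbind(recognisability, confidence) ~ animal + species, data = ratings, FUN = mean)&#13;
animal_means = animal_means[order(-animal_means$recognisability, -animal_means$confidence), ]&#13;
animal_means = animal_means[!duplicated(animal_means$species), ]   # keep the best-scoring animal per species&#13;
chosen = head(animal_means, 32)                                    # top 32; ties broken by naming confidence&#13;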
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously taken part in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females), 51 of whom were students/staff/members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (possibly attenuating the strength of previously found effects) and the additional memory/recognition task, a conservative-leaning effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a medium-to-small repeated-measures, within-between interaction was approximately 52. &#13;
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and recognition tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 at 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by departmental technicians. &#13;
The 32 animals chosen via the pre-test for use in the main experiment (Simon/recognition task) were recoloured to be entirely blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to the left or to the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task &#13;
 &#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to sit at a third computer, where they sat approximately 60 cm (diagonally, approximately 45° from the centre of the screen) away from the computer either on the left or right side, with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm beside their partner. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton whenever an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which a silhouette appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task that came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. These 16 animals were further divided in half and matched to each of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the recognition task. Participant sitting location (left/right), stimulus colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the recognition task) were counterbalanced between participants. Additionally, stimulus presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
After reading brief instructions, participants completed a practice section. When participants achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks, where each block contained 16 trials (which corresponded to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., 8) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of (in)compatible trials for each participant (i.e., four of each compatible/incompatible trials per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible trials.&#13;
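A minimal R sketch of this compatibility coding is given below; the column names and example rows are assumptions rather than the original scripts.&#13;
# stimulus_side: where the coloured silhouette appeared; button_side: side of the pushbutton&#13;
# assigned to that colour&#13;
trials = data.frame(stimulus_side = c("left", "right", "left", "right"),&#13;
                    button_side   = c("left", "left", "right", "right"))&#13;
trials$compatibility = ifelse(trials$stimulus_side == trials$button_side,&#13;
                              "compatible", "incompatible")&#13;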
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks: in some studies, each trial (and thus stimulus presentation) terminated immediately upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was only displayed for a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), followed by a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration to 1000 ms irrespective of participant response ensured that each animal colour/species was displayed for an equal duration of time. This was important so as not to bias participants’ incidental memory towards trials on which one participant was slower to respond (and would therefore have kept the stimulus on screen for longer, disproportionately encouraging encoding). &#13;
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal was previously assigned to, and presented in the Simon task as, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in individual condition’s case, this simply refers to the not-self-assigned colour, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to their individual computers which they had initially used to give consent and demographic information, so as to minimize bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one-by-one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale between 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals appeared as during the previous (Simon) task—so long as they remember having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Thirty-two animal silhouettes were presented, of which 16 were seen in the Simon task, while the remaining 16 yet-to-be-seen animal images were added in as foils in this recognition task. The participants’ responses to the two aforementioned questions were recorded as key response variables. &#13;
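In line with the linear mixed effects modelling listed for this project, a minimal R sketch of how recognition responses to previously seen items could be analysed is given below; the data frame, column names and simulated values are assumptions, and this is an illustration rather than the analysis reported by the authors.&#13;
library(lme4)&#13;
set.seed(1)&#13;
recog = expand.grid(participant = 1:10, item = 1:32)&#13;
recog$condition = ifelse(recog$participant %% 2 == 0, "joint", "individual")   # between subjects&#13;
recog$old_item = as.integer(recog$item %in% 1:16)               # 16 old items, 16 foils&#13;
recog$assignment = ifelse(recog$item %% 2 == 0, "self", "other")  # colour assignment (within subjects)&#13;
recog$said_yes = rbinom(nrow(recog), 1, ifelse(recog$old_item == 1, 0.8, 0.2))  # placeholder responses&#13;
old_items = subset(recog, old_item == 1)                        # assignment only applies to old items&#13;
old_items$hit = old_items$said_yes&#13;
recog_model = glmer(hit ~ assignment * condition + (1 | participant), data = old_items, family = binomial)&#13;
summary(recog_model)&#13;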
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked about whether they had any suspicions of what the study was testing, or whether they paid specific attention to, and/or memorised the animal species shown in the Simon task on purpose (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which may undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner, using two questions. The first was a text-based question asking how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging between “I have never seen him/her before: s/he is a stranger to me.” and “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consists of pictographic representations of the degree of interpersonal closeness. Specifically, as can be seen in Figure 2, the scale contains six diagrams, each consisting of two Venn-diagram-like labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner). The six diagrams depict the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was used as a confirmatory measure for the IOS scale, the latter being the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3088">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3089">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3090">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3091">
                <text>Malcolm Wong&#13;
Aubrey Covill&#13;
Elisha Moreton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3092">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3093">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3094">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3095">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3096">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3097">
                <text>Dr. Jessica Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3098">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3099">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3100">
                <text>25 in a pre-test, 65 in the main experiment</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3101">
                <text>Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="145" public="1" featured="0">
    <fileContainer>
      <file fileId="136">
        <src>https://www.johnntowse.com/LUSTRE/files/original/78ebb8c54e3cbdb306df0d2337a3ee7a.pdf</src>
        <authentication>eff2d992759a35de11f501a68f43047f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3001">
                <text>Age-Related Changes in the Attentional Modulation of Temporal Binding </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3002">
                <text>Jessica Pepper</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3003">
                <text>8th September 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3004">
                <text>In multisensory integration, the time range within which visual and auditory information can be perceived as synchronous and bound together is known as the temporal binding window (TBW). With increasing age, the TBW becomes wider, such that older adults erroneously, and often dangerously, integrate sensory inputs that are asynchronous. Recent research suggests that attentional cues can narrow the width of the TBW in younger adults, sharpening temporal perception and increasing the accuracy of integration. However, due to their age-related declines in attentional control, it is not yet known whether older adults can deploy attentional resources to narrow the TBW in the same way as younger adults.&#13;
This study investigated the age-related changes to the attentional modulation of the TBW. 30 younger and 30 older adults completed a cued-spatial-attention version of the stream-bounce illusion, assessing the extent to which the visual and auditory stimuli were integrated when presented at three different stimulus onset asynchronies, and when attending to a validly-cued or invalidly-cued location. &#13;
A 2x2x3 mixed ANOVA revealed that when participants attended to the validly-cued location (i.e. when attention was present), susceptibility to the stream-bounce illusion decreased. However, crucially, this attentional manipulation affected audiovisual integration in younger adults but not in older adults. Whilst no definitive conclusions could be drawn about the width of the TBW, the findings suggest that older adults have multisensory integration-related attentional deficits. Directions for future research and practical applications surrounding treatments to improve the safety of older adults’ perception and navigation through the environment are discussed. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3005">
                <text>Ageing, attention, TBW, multisensory integration, stream-bounce illusion</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3006">
                <text>Participants&#13;
This study used a total of 60 participants: 30 younger adults (15 males, 15 females) aged between 18 and 35 years (M = 21.37, SD = 1.30) and 30 older adults (11 males, 19 females) aged between 60 and 80 years (M = 67.91, SD = 4.71). This sample size was determined via an a priori power analysis using the data of Donohue et al. (2015) and Chen et al. (2021), who conducted similar experiments (see pre-registration on www.aspredicted.com, project ID #65513). All participants were fluent English speakers. Participants were required to have normal or corrected-to-normal vision. Participants were ineligible to proceed with the experiment if they had a history or current diagnosis of neurological conditions (e.g. epilepsy, mild cognitive impairment, dementia, Parkinson’s Disease) or learning impairments (e.g. dyslexia), or had severe hearing loss resulting in the wearing of hearing aids.&#13;
Participants were recruited via opportunity sampling; the majority of younger participants were students at Lancaster University and were known to the researcher, whilst the majority of older participants were members of the Centre for Ageing Research at Lancaster University. All participants were able to provide informed consent. &#13;
&#13;
Pre-screening tools&#13;
Participants were asked to complete two pre-screening questionnaires using Qualtrics survey software (www.qualtrics.com), to assess their eligibility for the study.&#13;
Speech, Spatial and Quality of Hearing Questionnaire (SSQ; Appendix A; Gatehouse &amp; Noble, 2004). Participants rated their hearing ability in different acoustic scenarios using a sliding scale from 0-10 (0=“Not at all”, 10=“Perfectly”). Whilst, at present, no defined cut-off score on the SSQ is available as a parameter to inform decision-making, previous studies have indicated that a mean score of 5.5 is indicative of moderate hearing loss (Gatehouse &amp; Noble, 2004). As a result, people whose average score on the SSQ was lower than 5.5 were not eligible to participate in the experiment.&#13;
Informant Questionnaire on Cognitive Decline in the Elderly (IQ-CODE; Appendix B; Jorm, 2004). Participants rated how their performance in certain tasks now has changed compared to 10 years ago, answering on a 5-point Likert scale (1=“Much Improved”, 5=“Much worse”). An average score of approximately 3.3 is the usual cut-off point when evaluating cognitive impairment and dementia (Jorm, 2004), therefore people whose average score was higher than 3.3 were not eligible to participate in the experiment. &#13;
The mean scores of each pre-screening questionnaire are displayed in Table 1. An independent t-test revealed that there was no significant difference between age groups on the SSQ questionnaire [t(58) = -1.15, p=.253]; however, there was a significant difference between age groups on the IQ-CODE questionnaire [t(58) = -13.29, p&lt;.001].&#13;
Table 1&#13;
Mean scores on the SSQ and IQ-CODE pre-screening questionnaires, for both younger and older adults. Standard deviations displayed in parentheses.&#13;
Age group	SSQ	IQ-CODE&#13;
Younger	8.34 (1.10)	1.74 (0.51)&#13;
Older	8.67 (1.13)	3.03 (0.09)&#13;
&#13;
&#13;
Experimental Design&#13;
This research implemented a 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(Stimulus Onset Asynchrony [SOA]: Visual Only [VO] vs 0 milliseconds vs 150 milliseconds vs 300 milliseconds) mixed design, with Age as a between-subjects factor and Cue and SOA as within-subjects factors.&#13;
The experiment consisted of 16 different trial conditions (Table 2), randomised across all participants. Replicating the paradigm used by Donohue et al. (2015), the experimental block contained 72 validly-cued trials and 24 invalidly-cued trials, which were equally distributed between each side of the screen (left/right) and SOA conditions; this means that each participant completed 144 valid trials and 48 invalid trials for each SOA.  &#13;
&#13;
&#13;
Table 2&#13;
Number of trials within each Cue and SOA condition. All participants completed a total of 768 trials.&#13;
SOA (ms)	Valid (Left) N	Valid (Right) N	Invalid (Left) N	Invalid (Right) N&#13;
0	72	72	24	24&#13;
150	72	72	24	24&#13;
300	72	72	24	24&#13;
VO	72	72	24	24&#13;
&#13;
&#13;
Stimuli and Materials&#13;
Participants completed the experiment remotely, in a quiet room on a desktop or laptop computer with a standard keyboard. All participants were asked to wear headphones/earphones. A volume check was conducted at the beginning of the experiment; participants were presented with a constant tone and asked to adjust the volume of this tone to a clear and comfortable level. &#13;
The stimuli used in the task were replicated from Donohue et al. (2015). Each trial started with an attentional cue in the centre of the screen – a letter “L” or a letter “R” instructing participants to focus on the left or the right side of the screen. In addition to this, 2 pairs of circles were positioned at the top of the screen, one pair in the left hemifield and one pair in the right hemifield. The attentional cue lasted for 1 second, and 650 milliseconds after this cue disappeared, the circles in each pair started to move towards each other downwards diagonally (i.e. the two left circles moving towards each other and the two right circles moving towards each other). &#13;
In the trials, one pair of circles moved towards each other, intersected, and continued on the same trajectory (fully overlapping and moving away from each other). This full motion of the circles formed an “X” shape, with the circles appearing to “stream” or “pass through” each other. On the opposite side of the screen, the other pair of circles stopped moving before they intersected, forming half of this “X” motion. On 75% of the trials, the full “X”-shaped motion appeared on the side of the screen that the cue directed participants towards (validly-cued trials); on the other 25% of trials, the full motion occurred on the opposite side of the screen to where the cue indicated, and the stopped motion occurred at the cued location (invalidly-cued trials).&#13;
In addition to these visual stimuli, on 75% of the trials, an auditory stimulus was played binaurally (500Hz, 17 milliseconds), either at the same time as the circles intersected (0ms delay), 150ms after the intersection or 300ms after the intersection. The remaining 25% of the trials were visual-only (i.e. no sound was played). Participants were told that regardless of whether a sound was played, they must make their pass/bounce judgements based on the full motion of the circles (the “X” shape), even if the full motion occurred at the opposite side of the screen that they were attending to. &#13;
The experiment ended after all 768 trials – participation lasted approximately 1 hour. The experiment was built in PsychoPy2 (Peirce et al., 2019) and hosted by Pavlovia (www.pavlovia.org). &#13;
&#13;
Procedure&#13;
Prior to the experiment, a brief meeting was organised between the participant and the researcher via Microsoft Teams, to explain the task and answer any questions. Participants were emailed a link to a Qualtrics survey, which included the participant information sheet, consent form, demographic questions and pre-screening questionnaires. If the person was deemed eligible to take part in the experiment, Qualtrics redirected participants to the experiment in Pavlovia.&#13;
Participants were then presented with instructions detailing the attentional cue elements of the task and asking them to base their judgements on the full X-shaped motion of the stimuli. Participants were asked to press M on the keyboard if they perceived the circles to “pass through” each other or press Z if they perceived the circles to “bounce off” each other, answering as quickly and as accurately as possible. &#13;
Participants completed a practice block of 10 trials, then the test session commenced. After each set of 10 random trials, participants had the opportunity to take a break. Participants were provided with a full debrief upon completion of the experiment, and all participants could enter a prize draw to win one of two £50 Amazon vouchers.&#13;
&#13;
Statistical Analyses&#13;
This study required two separate mixed ANOVAs to analyse main effects and interactions, investigating significant differences between groups and conditions.&#13;
Reaction Times. &#13;
For the first dependent variable of reaction times (RT), mean RTs were calculated for each participant in each Cue x SOA condition, representing the time taken, in milliseconds, for each participant to press M or Z on the keyboard at the end of each trial. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(SOA: 0ms vs 150ms vs 300ms vs Visual-Only) mixed ANOVA was then conducted on these mean RTs. &#13;
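As a minimal illustrative sketch only (the reported analyses were run in SPSS; the data frame and column names below are assumed, not taken from the project files), the same aggregation and 2 x 2 x 4 mixed ANOVA could be specified in R with the afex package:&#13;
# Illustration only: 'trials' is an assumed data frame with one row per trial and&#13;
# columns participant, age, cue, soa and rt; the project's own analysis used SPSS.&#13;
library(afex)&#13;
# Mean RT for each participant in each Cue x SOA cell&#13;
mean_rt = aggregate(rt ~ participant + age + cue + soa, data = trials, FUN = mean)&#13;
# 2(Age, between) x 2(Cue, within) x 4(SOA, within) mixed ANOVA on the cell means&#13;
rt_anova = aov_ez(id = "participant", dv = "rt", data = mean_rt,&#13;
                  between = "age", within = c("cue", "soa"))&#13;
rt_anova&#13;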
Bounce/Pass Judgements. &#13;
For the second dependent variable of the bounce/pass judgements, the percentage of “Bounce” responses provided in each Cue x SOA condition was calculated for each participant. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 3(SOA: 0ms vs 150ms vs 300ms) mixed ANOVA was then conducted on these percentage data. Visual-Only (VO) trials were compared separately for valid and invalid conditions using a paired samples t-test. Post-hoc paired samples t-tests were also used to investigate significant differences between the 0ms, 150ms and 300ms SOA conditions. &#13;
Bounce/Pass Judgements: Pairwise comparisons. To analyse pairwise comparisons in the significant interaction of Age and Cue, responses in each SOA condition were collapsed – that is, a grand mean percentage of “Bounce” responses was calculated by averaging the percentage of “Bounce” responses in the 0ms, 150ms and 300ms trials in the Valid condition and in the Invalid condition. This produced an overall Valid and an overall Invalid mean percentage of “Bounce” responses for each participant. A 2(Age: Younger vs Older) x 2(Collapsed Cue: Valid vs Invalid) mixed ANOVA was conducted on this collapsed data to investigate differences between the proportion of “Bounce” responses in the Valid and Invalid condition for younger adults, and in the Valid and Invalid condition for older adults. In addition, 2 separate one-way ANOVAs were conducted on this collapsed data (Age as the between-subjects factor, and Valid or Invalid as the within-subjects factor) to investigate differences between younger and older adults in the Valid condition, and differences between younger and older adults in the Invalid condition (Laerd, 2015). &#13;
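A similarly hedged sketch of the collapsing step described above (again with hypothetical column names; the reported analysis was run in SPSS, and only the 2(Age) x 2(Collapsed Cue) ANOVA is shown): the 0ms, 150ms and 300ms cells are averaged within each Cue condition before the mixed ANOVA is fitted.&#13;
# 'bounce_pct' is an assumed data frame holding, per participant, the percentage of&#13;
# "Bounce" responses in each age x cue x soa cell (visual-only trials excluded).&#13;
library(afex)&#13;
# Averaging over soa collapses the three SOA cells into one grand mean per Cue condition&#13;
collapsed = aggregate(pct_bounce ~ participant + age + cue, data = bounce_pct, FUN = mean)&#13;
aov_ez(id = "participant", dv = "pct_bounce", data = collapsed,&#13;
       between = "age", within = "cue")&#13;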
Significance. &#13;
An alpha level of .05 was used for all statistical tests. Any responses (judgements or RTs) that were more than ±3 standard deviations from the mean were considered anomalous and were removed from the analyses. Mauchly’s test of sphericity was violated for the main effect of SOA, therefore Greenhouse-Geisser adjusted p-values were used where appropriate. As an a priori power analysis determined the desired sample size for this study, and this sample size was achieved, non-significant results are unlikely to be due to the study being underpowered. Statistical analyses were conducted using SPSS (version 25, IBM).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3007">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3008">
                <text>Data/SPSS.sav; Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3009">
                <text>Pepper2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3010">
                <text>Robert Taylor</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3011">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3012">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3013">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3014">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3015">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3016">
                <text>Dr Helen Nuttall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3017">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3018">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3019">
                <text>60 participants - 30 younger adults and 30 older adults</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3020">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="122" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2622">
                <text>Seeing helps our hearing: How the visual system plays a role in speech perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2623">
                <text>Brandon O’Hanlon</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2624">
                <text>2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2625">
<text>Difficult listening conditions can reduce our ability to successfully discriminate speech. In these conditions, the visual system assists speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two senses in speech perception. Because the effects are strongly stimulus-dependent, estimates of how far one stream can be desynchronised from the other differ drastically from account to account. Previous research has not considered viseme categories to ensure that selected speech phonemes are visually distinct. This study aimed to create and validate a set of audiovisual stimuli that takes these variables into account for examining speech-in-noise, and to determine the SOA integration period for these stimuli. 27 online participants were presented with either audio-only stimuli of a speaker or audiovisual stimuli that also showed the speaker’s lip and mouth area as the speech was spoken. The speech was either clear or in noise, and either had no stimulus onset asynchrony or had SOA introduced at one of five levels (200ms, 216.6ms, 233.3ms, 250ms, 266.6ms). Results indicate that, whilst the effect of visual information assisting speech-in-noise perception is apparent, it is weaker than in previous literature. Whilst response times imply that 250ms marks the integration window for these stimuli, no significant accuracy changes corroborate this finding. In all, the study was successful in creating a more valid set of stimuli for testing. As sufficient statistical power was not achieved, more testing would be required to firmly cement the findings. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2626">
                <text>Linear mixed-effects modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2627">
<text>connection, allowing for a direct, uninterrupted video feed at 1920x1080 resolution and 60 frames per second. The camera was mounted onto a stable tripod to reduce movement of the camera as much as possible during recording. DroidCam X software was used to aid the streaming of the video in real-time with little compression and loss whilst still retaining a 1080p@60fps quality level. OBS software was used for recording as it allowed the audio from the external microphone and the video from the camera to be encoded together in real-time as a single MKV file. This was beneficial, as it removed potential human error that can occur when manually stitching audio files and video files together. Therefore, we can be certain that there were no asynchronous anomalies between the audio and video streams during encoding. Another benefit of OBS software is that it reports how many frames of video are dropped when recording and encoding an MKV file, which was important to ensure that the home desktop was encoding the video in its entirety, akin to a lab-calibrated desktop. No dropped frames were reported for any of the speech tokens recorded. All stimuli were recorded as MKV files initially to avoid lossy compression in the recording. A software-based x264 CPU encoding method was used for the recording, due to the lack of a hardware GPU encoder (such as NVENC) on the home system. &#13;
After the initial recording, the speech tokens were edited in length and converted to mp4 files at a resolution of 1280 x 720p and a frame rate of 60 frames per second. As the study would be completed on participants’ laptops or desktop systems and using their internet connection, we could not ensure that all participants were using a device with a 1920 x 1080p resolution screen. By reducing the resolution of the files to 720p, all potential participant screen resolutions could be accommodated whilst ensuring all participants viewed the files at the same resolution. For audio-only conditions, the video of the lips was overlaid with a plain black PNG image file. This kept the audio-only stimuli in video format rather than exporting the file as an mp3. Because the internet connection speed of each participant could not be controlled, the experiment was set to download all stimuli as browser cache before it began, ensuring that there were no latency differences.&#13;
Audacity software (Audacity Team, 2021) was then used to rip the audio from the MKV files to be edited as WAV files in Praat software (Boersma &amp; Weenink, 2021) for the creation of speech-shaped noise. First, a sentence using English words – ‘His plan meant taking a big risk’ - was recorded to provide a base for the speech-shaped noise. White noise was then produced using Praat’s white noise generator. The noise was brought down to an intensity tier, then an amplitude tier. This was then multiplied with the sentence above to create speech-shaped noise. Praat was then used to combine the speech-shaped noise with the speech-in-noise conditions at a speech to noise ratio of minus 16dB. This was done using a Praat script developed by McCloy (2021). Finally, Audacity was used again to ramp up the start and ramp down the ends of all audio files for every condition. The audio was then stitched back onto the MP4 files. &#13;
For the conditions where the onset of the stimuli was asynchronous, Lightworks was again used to displace the audio ahead of the onset of the speech token by an exact number of frames of the 60 fps video footage (12, 13, 14, 15, and 16 frames), corresponding to the stimulus onset asynchrony levels of 200ms, 216.6ms, 233.3ms, 250ms, and 266.6ms for the relevant conditions. The result was 42 stimuli in MP4 format, representing three speech tokens (Ba, Fa, and Ka) for each of the 14 conditions presented to the participant. These were uploaded to a GitHub repository to be accessed by Pavlovia during the experiment. &#13;
Procedure&#13;
Participants were first given a participant code and a link to the online Qualtrics consent and screening forms via email. A copy of the participant information sheet was displayed at the start of the Qualtrics questionnaire to remind participants of the study to ensure informed consent was given. Participants were also reminded at this stage to ensure that they were in a quiet room with no background noise, as well as to load the experiment on either Microsoft Edge, Google Chrome, or Mozilla Firefox internet browsers on a laptop or desktop computer. They were explicitly told not to open the experiment on any other browser, such as Safari, nor on a mobile or tablet device as these were incompatible. Once consent had been given and the participant had met the screening criteria based on their answers, they were automatically redirected to the experiment on Pavlovia. If a participant did not meet the criteria for the study, they were redirected to a message informing them of their ineligibility and they were prevented from proceeding to the rest of the experiment. To begin the experiment, participants were once again reminded of browser and device limitations and told to use headphones in a quiet room. If a participant was using an incompatible device or browser to load the experiment, they were instructed to close the experiment and re-open it on the correct device or browser before beginning. &#13;
A volume check began, in which a constant A tone played, and participants were asked to adjust the volume of their device as necessary for a comfortable auditory experience and to ensure that the audio was playing correctly at a sufficient volume level. In a typical lab setting, a set volume would be decided for all participants. However, as the study was completed online on the participant’s own devices, settling for the participant’s preferred hearing volume was preferable instead. Once complete, the spacebar would be pressed, and the tone stopped. Participants were then given a brief explanation of the task to complete. They were informed that a video would play either showing no visual information or visual information of lips moving. Meanwhile, speech would be played. Participants were told to listen carefully to the speech sound spoken, and after hearing the sound to press one of three buttons on their keyboards that corresponded with the three available speech tokens. They were reminded before and after each trial to press 'z' on their keyboard if they heard "Ba", 'x' for "Fa", or 'c' for "Ka”. Participants were told to answer as quickly as possible. If they were unsure, they were told to make a guess. &#13;
To begin, participants were given 6 practice trials to attempt the task before data were collected. These used the clear, 0ms, audiovisual condition stimuli, with 2 trials for each of the 3 speech tokens (Ba, Fa, and Ka). A white crosshair was displayed on the screen for 1000ms before each trial began to bring attention to the centre of the screen, where the video trials would be displayed. Stimuli were shown for 2500ms, then the response screen was displayed. On this screen, the participants were reminded of the buttons to press for each of the three speech sounds. Only the three buttons could be pressed, and pressing the buttons whilst the stimuli were still playing was not possible. The first key pressed after the stimuli had played was recorded and took the participant to a relay screen, where they were informed to press the spacebar to continue. Upon pressing the spacebar, the white crosshair returned, and the next trial began.&#13;
After completing the practice, the participant was reminded of the task details once more before the experiment began for real. A total of 546 trials (not including the practice trials) were completed. The order of the trials and conditions was completely random to minimise any potential order bias. Every 42 trials, a break screen would appear. This screen told the participant to take a short break before continuing with a press of the spacebar. If the participant did not wish to take a break, they were permitted to continue with a spacebar press immediately. There was a total of 12 breaks in the experiment. After each break, participants were asked a basic mathematics question, for example: ‘What is 3 + 2?’. Participants could only proceed to the next chunk of trials if they responded with the correct answer. This was put in place to ensure that participants were continuing to pay attention to the experiment. Upon reaching the end of the final trial, participants were shown an ending screen where they were informed that the experiment had ended. Participants were also instructed to email the primary researcher for debriefing information. Upon completing the study, participants could close the browser tab or window and all data would remain recorded on the Pavlovia system. &#13;
If a participant closed the browser tab or window during the experiment, partial data would be recorded up to the last trial that they responded to. If this was by mistake, participants could open the experiment again and restart. However, progress would not be saved, and the participant would have to start the experiment again from scratch. Using the same participant code would not overwrite the participant’s previous data, and instead created a new participant dataset. Full datasets were used over the partial dataset in this case, unless no full dataset was recorded for a participant. &#13;
Analysis&#13;
Descriptive statistics were first gathered from each condition for both the accuracy ratings and the reaction times. The assumptions of linear and generalised linear mixed-effects models were tested, including residual plots to check for linearity, quantile-quantile plots to check for normality, variance inflation factors to assess multicollinearity between stimuli type, speech type, and stimulus onset asynchrony levels, and checks that the assumption of homoscedasticity was met. &#13;
Using both the lmerTest (Kuznetsova et al., 2020) and lme4 packages, a combination of linear mixed-effects regression model (LMER) analyses for the response time scores and generalised linear mixed-effects regression model (GLMER) analyses for the accuracy scores was conducted. LMERs were chosen instead of repeated-measures general linear models such as ANOVA because they account for random effects that may be present across all 546 trials on a participant-by-participant basis. As accuracy is inherently bounded – each response is either accurate or inaccurate – it can be treated as categorical. Therefore, GLMERs were used for the accuracy analyses to ensure that the assumptions concerning categorical dependent variables in mixed-effects models were met. For the LMER analyses, there were two models. Model 1 used response times as the dependent variable, modelled with stimuli type and speech type as fixed effects. The interactive effect between stimuli type and speech type was also included in the model. Model 2 used response times as the dependent variable, modelled with speech type and stimulus onset asynchrony timings as fixed effects. The interactive effect between speech type and stimulus onset asynchrony timings was also included in the model. &#13;
The GLMER analyses also had two models. Model 1 used accuracy as the dependent variable, modelled with stimuli type and speech type as fixed effects, including the interactive effects between the two fixed effects. Model 2 used accuracy as the dependent variable, modelled with speech type and stimulus onset asynchrony timings as fixed effects. Again, interactive effects were included. For all four analyses, the speech sound token used (Ba, Fa, or Ka), participant age, and the participant ID were all included as random effects in the respective models.&#13;
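As a minimal sketch of how these models could be written with the lme4 and lmerTest packages named above (the data frame 'dat' and its column names are assumed here purely for illustration):&#13;
# Sketch only: dat, rt, accuracy, stimuli_type, speech_type, soa, token, age and&#13;
# participant are assumed names, not taken from the project files.&#13;
library(lme4)&#13;
library(lmerTest)&#13;
# LMER Model 1: response times ~ stimuli type x speech type, with the stated random effects&#13;
rt_m1 = lmer(rt ~ stimuli_type * speech_type + (1 | token) + (1 | age) + (1 | participant), data = dat)&#13;
# LMER Model 2: response times ~ speech type x SOA level&#13;
rt_m2 = lmer(rt ~ speech_type * soa + (1 | token) + (1 | age) + (1 | participant), data = dat)&#13;
# GLMER Model 1: trial-level accuracy (0/1) with a binomial family&#13;
acc_m1 = glmer(accuracy ~ stimuli_type * speech_type + (1 | token) + (1 | age) + (1 | participant), data = dat, family = binomial)&#13;
# GLMER Model 2 follows the same pattern, with speech_type * soa as the fixed effects&#13;
# Simple assumption checks of the kind described above&#13;
plot(fitted(rt_m1), resid(rt_m1))&#13;
qqnorm(resid(rt_m1))&#13;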
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2628">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2629">
                <text>Excel File</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2630">
                <text>O’Hanlon2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2631">
                <text>Stephanos Mosfiliotis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2632">
                <text>Open (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2633">
                <text>None (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2634">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2635">
                <text>Data </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2636">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2637">
                <text>Dr Helen E. Nuttall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2638">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2639">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2640">
                <text>48 participants (11 male, 14 female, 2 non-binary)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2641">
                <text>Quantitative</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="102" public="1" featured="0">
    <fileContainer>
      <file fileId="69">
        <src>https://www.johnntowse.com/LUSTRE/files/original/95005cf8d8749a05d25303ac63248ba7.pdf</src>
        <authentication>30840414bccfa352a460d451969fdc9f</authentication>
      </file>
      <file fileId="71">
        <src>https://www.johnntowse.com/LUSTRE/files/original/174819714ee258dfb13c0fa7a6ace304.csv</src>
        <authentication>b6253d1266ff4742351c2d4c4f8a73c4</authentication>
      </file>
      <file fileId="72">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8e409cbed00f3a76a3d22f879e2a2f34.csv</src>
        <authentication>daeb9c288d735fb09af0501cee1095a4</authentication>
      </file>
      <file fileId="73">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f23857f7385dc20a585dbf7e73125224.csv</src>
        <authentication>44e19fb4059e8badfced3d17ca965b8c</authentication>
      </file>
      <file fileId="74">
        <src>https://www.johnntowse.com/LUSTRE/files/original/16b37ffbcc5b229554cf3d83269cb255.csv</src>
        <authentication>e017b75af5cef3fd4f6aff3c9addce1c</authentication>
      </file>
      <file fileId="75">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e801d8f94428315c35ca6cb346277f2a.csv</src>
        <authentication>3798e9838e1c4b39a4550698cacb927d</authentication>
      </file>
      <file fileId="76">
        <src>https://www.johnntowse.com/LUSTRE/files/original/213b2ed462d59dbc519e38d61bd28ce0.csv</src>
        <authentication>99511941f8d43496073e0ddb9c73955c</authentication>
      </file>
      <file fileId="79">
        <src>https://www.johnntowse.com/LUSTRE/files/original/1da7d953dc7cb556a9ef8b6fa9d144a0.doc</src>
        <authentication>df30a3da04823c1894e534bd62de7b14</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2316">
                <text>Running Memory Span Development: The Input Mechanism and Hebb effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2317">
                <text>Yu Xie</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2318">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2319">
<text>It is unclear whether an active or a passive strategy is used in the running memory task, and whether the Hebb effect is elicited by it. The aim of this study was to explore the input mechanism and the Hebb effect in the running memory task via a developmental study. Children were asked to perform four working memory tasks: a counting span task, a free recall task, a Hebb digit task, and a running memory task. In order to explore the Hebb effect in the running memory task, the last three digits of every third list were repeated. The results suggested that running memory is a recency-based phenomenon and that the Hebb effect is elicited in children. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2320">
                <text>Participants &#13;
Fifty-seven Chinese primary school students (23 female, 34 male), aged between 7 and 13 years (Mean = 9 years 6 months; SD = 1.754) took part in the present study. The children were recruited from Grade one to Grade six at Tianyi School in Xuancheng City. Chinese was the first language of all children. All the children completed a 45-minute testing session, which involved four memory tasks. At the end of the test, children received a notebook as a small gift of appreciation for taking part in the present study. &#13;
Materials &#13;
The experiment was presented using SuperLab 4.0 on a Sony Laptop with a 14-inch colour screen. The responses of participants were recorded by the tester on answer sheets. Every child completed a counting span task, a free recall task, a Hebb digit task, and a running memory task.&#13;
Counting span task. The counting span arrays were developed from Towse and Hitch (1995) and consisted of an equal number of target triangles and non-target squares. The target triangles were red, approximately 30 mm in length, and the non-target squares were blue, approximately 28 mm in length. The number of both target triangles and non-target squares varied from 3 to 9 (mean = 6). The counting span arrays were presented in the centre of the computer screen on a white background. The triangles and squares were randomly displayed at different positions in every display.&#13;
Free recall task. For this task, 144 high-frequency Chinese two-syllable nouns (see Appendix A) were recorded in a male voice at a rate of 1 word per second. The words were recorded using Adobe Audition 3.0. Two practice lists and ten test lists were presented, and every list included 12 words presented at a rate of 1 word per second. The words were played by a computer.&#13;
Hebb digit task. All digit lists contained the digits 1 to 9 in random order, with no repeated digits within a list (see Appendix B). The digits were recorded using Adobe Audition 3.0 at a rate of 1 digit per second. There were 2 practice lists and 24 test lists, and each list contained nine digits. Among the test lists, 16 lists were different, and the other 8 were identical – termed the Hebb list – and were presented on every third trial beginning on Trial 3. The 24 test lists were divided into 8 blocks, each involving 2 different lists and a Hebb list.&#13;
Running memory task. The lists included 12, 14, 16, 18, or 20 random digits from 1 to 9 (see Appendix C), which were presented as recorded speech. Two presentation rates were used in this task: 0.5 s per digit as the fast rate and 2.5 s per digit as the slow rate. In both conditions, there were 2 practice lists and 24 test lists. In order to test the Hebb effect in the running memory task, the 24 test trials comprised 16 completely different lists and 8 lists whose last 3 digits were the same, presented on every third trial.&#13;
Procedure &#13;
The experiment lasted 45 min, and every child completed 4 tasks. Each participant was seated on a chair in front of the computer screen, at a distance of 65 cm. All tasks included two practice trials to help children become familiar with the procedure. Once children completed the practice trials and understood the procedure, they could proceed to the test trials. When children were performing the tasks, the experimenter gave no feedback about the accuracy of the words or digits. Task order was counterbalanced using a Latin square design, as shown in Table 1. Because there were two conditions in the running memory task – the fast rate and the slow rate – these were also counterbalanced. Therefore, in all, there were eight orders in the present study, and all children were equally divided into eight groups based on the eight orders. After completing each task, participants were given sufficient time to rest.&#13;
Counting span task. The children were introduced to the counting and recall tasks. Before every trial, a fixation symbol was displayed in the centre of the screen for 0.5 s. When the target triangles and non-target squares were presented, participants were required to count the red triangles aloud and repeat the final number. Once the children repeated the last number, the experimenter pressed the keyboard to show the next display, and the counting speeds were recorded by the computer automatically. There were three trials at every level, and each trial at level n included n + 1 displays. For example, participants counted 2 displays in level 1 and 3 displays in level 2. The final level was level 4, which contained 5 displays. After 2 to 5 displays, children were asked to report all the final numbers of red target triangles in the previous displays. If a child failed to recall correctly for at least two of the three trials, the counting span task was ended at that level; otherwise, they could progress to the next level.&#13;
Free recall task. Children were required to listen to some words, and repeat them as many as possible in any order, after the 12th word. The experimenter wrote down the responses of participants on answer sheets. If the children could not report a new word within 30 s, the experimenter would proceed to the next trial. &#13;
Hebb digit task. The procedure for the Hebb digit task was developed by Hebb (1961). Children were asked to listen to every list, and report all digits in the right order. Children reported the digits orally, and the experimenter recorded the response on an answer sheet. Because the running memory task also involved Hebb lists, 48 children were asked whether they were aware of any regular pattern in the digit tasks after they completed both Hebb digit task and running memory task. Only 5 participants noticed the repetition in the running memory and Hebb digit tasks.&#13;
Running memory task. Children listened to digit lists that differed from those in the Hebb digit task and were required to repeat only the last three digits rather than the whole list. The order of the two rate conditions was counterbalanced: half of the children received the fast rate condition first and the other half received the slow rate condition first.&#13;
Scoring&#13;
Counting span task. Counting errors and counting speed were recorded, and the scoring method was the partial-credit unit scoring described by Conway et al. (2005). First, the correctly recalled items in each sequence were counted. If all items in a sequence were correct, the sequence was given one point; otherwise, the sequence was scored as the proportion of correct items. Finally, a participant's counting span was calculated as the sum of the scores across all sequences. &#13;
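A minimal Python sketch of this scoring rule (the trial data structure is illustrative, not taken from the dataset):&#13;
def counting_span(trials):
    """Partial-credit unit scoring as described above: each trial contributes
    the proportion of its to-be-remembered counts recalled correctly, and the
    span is the sum of those proportions across trials."""
    span = 0.0
    for target, recalled in trials:          # e.g. target = [3, 5], recalled = [3, 4]
        correct = sum(r == t for r, t in zip(recalled, target))
        span += correct / len(target)
    return span

# Example: one fully correct trial and one half-correct trial give 1.0 + 0.5 = 1.5.
print(counting_span([([3, 5], [3, 5]), ([2, 4], [2, 7])]))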
Free recall task. The scoring method was the one prescribed by Tulving and Colotla (1970), which involves calculating the intratrial retention interval (ITRI). The ITRI for a recalled item is the number of items, presented or recalled, that intervene between its presentation and its recall. For instance, if the presented sequence was A, B, C, D, E, F, G and a participant reported G, F, and A, the ITRIs for those items were 0, 2, and 8, respectively. Before calculating the ITRIs, a digit span based on the non-repeating Hebb lists was calculated for every child. If a child's digit span was 5, an item was classified as a word from primary memory when its ITRI was 5 or less, and as a word from secondary memory when its ITRI was 6 or more. &#13;
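The ITRI computation and the primary/secondary classification might be implemented as in the following sketch (Python; function names are illustrative):&#13;
def itri_values(presented, recalled):
    """For each recalled item, count the items (presented or recalled) that
    intervene between its presentation and its recall, as in the A-G example."""
    values = []
    for output_pos, item in enumerate(recalled):
        presented_after = len(presented) - presented.index(item) - 1
        values.append(presented_after + output_pos)
    return values

def classify_items(presented, recalled, digit_span):
    """Label each recalled word as primary or secondary memory, using the
    child's digit span as the cut-off."""
    return ["primary" if v <= digit_span else "secondary"
            for v in itri_values(presented, recalled)]

# Reproduces the worked example: ITRIs of 0, 2 and 8 for G, F and A.
print(itri_values(list("ABCDEFG"), ["G", "F", "A"]))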
Hebb digit task. Every digit recalled correctly in the correct position was scored one point. The score for the non-repeating lists was the mean score across the non-repeating lists, and the score for the repeating lists was the mean score across the repeating lists. &#13;
Running memory task. The running memory span was calculated as the mean number of digits recalled in the correct positions per trial. If all 3 digits were recalled in the correct sequence, the trial score was 3; if 2 digits (for example, the first and second, the second and third, or the first and third) were in the correct serial order, the score was 2; if a single digit was in the correct position, the score was 1. As in the Hebb digit task, scores for non-repeating and repeating lists were calculated separately. &#13;
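A sketch of one reading of this per-trial rule (Python): digits are credited when they appear in the correct serial order, which matches the two-digit examples above; a strictly positional reading would be an alternative, so this is illustrative only.&#13;
def running_memory_trial_score(target_last3, recalled):
    """Score a trial 0-3: the number of target digits reported in the correct
    serial order (a longest-common-subsequence reading of the rule above)."""
    def lcs(a, b):
        # Tiny recursive longest common subsequence; fine for 3-digit targets.
        if not a or not b:
            return 0
        if a[0] == b[0]:
            return 1 + lcs(a[1:], b[1:])
        return max(lcs(a[1:], b), lcs(a, b[1:]))
    return lcs(list(target_last3), list(recalled))

def running_memory_span(trials):
    """Mean trial score, computed separately for repeating and non-repeating lists."""
    return sum(running_memory_trial_score(t, r) for t, r in trials) / len(trials)

# Example: target 1 2 3 recalled as 1 3 scores 2 (first and third in order).
print(running_memory_trial_score([1, 2, 3], [1, 3]))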
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2321">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2322">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2323">
                <text>Xie2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2324">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2325">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2326">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2327">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2328">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2329">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="98" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2241">
                <text>The Impact of Sleep Patterns on Emotion Regulation in Taiwanese Adolescents</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2242">
                <text>Jhih-Ying, Chen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2243">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2244">
                <text>A number of studies have shown that emotion regulation is related to sleep, often suggesting that good sleep quality leads to better emotion regulation. However, research that has empirically documented the link between individuals’ specific sleep patterns/circadian types and emotion regulation among adolescents is scant. This study therefore explored whether there is an interaction between circadian type and the corresponding peak time on emotion regulation. Participants were 204 boys and 148 girls aged 13 to 16 years. The study involved three questionnaires and two modified emotional Stroop tasks, a Facial-Emotional Stroop task and a Lexical-Emotional Stroop task, as assessments of emotion regulation. The questionnaire and experimental data were analysed with a series of multivariate ANOVAs to test for main effects of the two independent variables, and their interaction, on the two emotion regulation measures. There were three main findings. Firstly, ‘morning people’ committed more errors on the facial task than ‘evening people’. Secondly, participants who completed the tasks in the afternoon had faster reaction times on the lexical task than those tested in the morning. Thirdly, the interaction between circadian type and the corresponding peak time appeared only in the evening group. In sum, this study may help to explain the relationship between sleep patterns and emotion regulation in adolescents. Nevertheless, further studies investigating circadian types in relation to emotion regulation in adolescents are needed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2245">
                <text>sleep patterns, circadian types, morningness-eveningness, on/off-peak time, emotion regulation, cognitive control, adolescents</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2246">
                <text>Materials&#13;
Each participant was asked to complete three online questionnaires about sleep and mood, as well as two experimental tasks assessing emotion regulation. The three questionnaires had been translated into Chinese and checked by a native Chinese-speaking professor in the Department of Psychology at Lancaster University. &#13;
	Sleep Measures.&#13;
Circadian Types Questionnaire. Participants completed the Morningness-Eveningness Questionnaire (MEQ) (Horne &amp; Östberg, 1976) to assess when their biological clock reaches peak alertness, which indicates the time of day at which people work most efficiently and function best cognitively, behaviourally and emotionally (see Appendix A). Participants were categorised into three groups based on the MEQ score: above 58 for the morning type, 42 to 58 for the intermediate type, and below 42 for the evening type.&#13;
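A minimal sketch of this grouping (Python; treating scores of exactly 42 or 58 as intermediate is an assumption based on the standard MEQ bands):&#13;
def circadian_type(meq_score):
    """Map an MEQ total onto a circadian group using the cut-offs above."""
    if meq_score > 58:
        return "morning"
    if meq_score >= 42:
        return "intermediate"
    return "evening"

print([circadian_type(s) for s in (65, 50, 30)])  # ['morning', 'intermediate', 'evening']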
Sleep Quality Questionnaire. To assess whether participants had sleep dysfunction, they were also asked to fill out the Pittsburgh Sleep Quality Index (PSQI) (Buysse, Reynolds, Monk, Berman &amp; Kupfer, 1989), which elicited information about their sleep quality (see Appendix B). Higher scores indicate poorer sleep quality. This score was used to examine whether sleep quality influences emotion regulation ability. &#13;
Mood Measurements.&#13;
Emotional Problems Questionnaire. Participants filled out the Depression Anxiety Stress Scale-21 (DASS-21) (Antony, Bieling, Cox, Enns &amp; Swinson, 1998), a self-report measure of mood over the most recent week (see Appendix C). The questionnaire covers three dimensions of negative mood: depression, anxiety and stress. Each dimension has an independent score, with higher scores indicating more emotional problems. In this study, the three sub-scores were added together to produce a composite measure of emotional difficulties.&#13;
Emotion Regulation.&#13;
In addition to the questionnaires, participants were asked to complete two modified “Emotional Stroop Tasks”, the Lexical-Emotional Stroop Task and the Facial-Emotional Stroop Task, as an assessment of their cognitive control in response to emotional stimuli (Isaac, Vrijsen, Eling, van Oostrom, Speckens &amp; Becker, 2012).&#13;
Lexical-Emotional Stroop Task. The stimuli consisted of three kinds of emotional words, positive, negative and neutral, each represented by five words (see Table 1), and each word was printed in four colours (blue, green, red and yellow). To assess emotion regulation ability, participants were asked to classify the colour by pressing the appropriate button as fast as they could: for a blue or green word they pressed “Q”, whereas for a red or yellow word they pressed “P”. Each trial began with a fixation point for 200 ms, followed by the stimulus for 2000 ms, to ensure that participants had enough time to react. All emotional colour-words were presented in random order. After the key press, feedback indicating whether the response was correct was shown for 500 ms (see Figure 1). Before the 30 real trials, participants received clear instructions and completed six practice trials to ensure that they understood the task. All stimuli were translated into Chinese and appeared in font DFKai_SB at font size 96. Stimuli were presented on the computer screen, with the coloured words appearing against a black background.&#13;
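The trial timeline and response mapping described above can be summarised in the following plain-Python outline; the actual experiment was implemented in PsyToolkit, so this is only an illustration of the structure, not the script that was used.&#13;
# Durations and key mapping for one Lexical-Emotional Stroop trial, as described above.
TRIAL_TIMELINE_MS = [("fixation", 200), ("stimulus", 2000), ("feedback", 500)]
KEY_MAP = {"blue": "Q", "green": "Q", "red": "P", "yellow": "P"}

def expected_key(stimulus_colour):
    """Return the key a participant should press for a given word colour."""
    return KEY_MAP[stimulus_colour]

def is_correct(stimulus_colour, pressed_key):
    """Score a single response against the colour-to-key mapping."""
    return pressed_key == expected_key(stimulus_colour)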
Facial-Emotional Stroop Task. The stimuli were 160 emotional faces: 10 identities (5 male and 5 female) x 4 emotions (happy, neutral, angry and sad) x 4 Stroop colours (blue, green, red and yellow) (see Figure 2). The emotional faces were selected from the Taiwan Corpora of Chinese emotions and relevant psychophysiological data (Chen, Zhou &amp; Zeng, 2013), which reduced cultural differences when Taiwanese participants took the Facial-Emotional Stroop task. As in the Lexical-Emotional Stroop task, participants classified the colour by pressing the appropriate button as fast as they could: for a blue or green facial expression they pressed “A”, whereas for a red or yellow face they pressed “L”. Each trial began with a fixation point for 200 ms, followed by the stimulus for 2000 ms, to ensure that participants had enough time to respond. There were 30 real trials, randomised per participant, preceded by six practice trials. Within the trials, feedback indicating whether the last response was correct was shown for 500 ms (see Figure 3). All facial stimuli were cropped to remove hair and other external features that could distract participants during the task. Stimuli were presented on the computer screen, with the coloured facial expressions appearing against a black background. Stimulus presentation and response collection for both the Lexical and Facial tasks were programmed in PsyToolkit (Stoet, 2010) (see Appendices D and E) and run on Windows computers.&#13;
Table 1&#13;
Stimuli from the Lexical-Emotional Stroop Task&#13;
Positive	Neutral	Negative&#13;
快樂 (Happy)	無聊 (Boredom)	生氣 (Anger)&#13;
被愛 (Beloved)	平靜 (Calmness)	焦慮 (Anxiety)&#13;
滿足 (Satisfaction)	驚訝 (Surprise)	厭惡 (Disgust)&#13;
自豪 (Pride)	疑惑 (Confusion)	恐懼 (Fear)&#13;
舒服 (Comfort)	害羞 (Shyness)	悲傷 (Sadness)&#13;
 &#13;
Figure 1. The diagram of Lexical-Emotional Stroop Task. In this example, the stimulus is a word of Blue Happy.&#13;
 &#13;
Figure 2. Sample happy male stimuli used from the Facial-Emotional Stroop Task.&#13;
&#13;
 &#13;
Figure 3. The diagram of Facial-Emotional Stroop Task. In this example, the stimulus is a male’s face of Blue Happy.&#13;
Procedure&#13;
This study was approved by the director of the Counselling Department at Mingder High School and was combined with the counselling curriculum. All students’ parents were provided with an information sheet about the study (see Appendix F) and an opt-out consent form (see Appendix G) one week beforehand. Only parents who did not want their child to participate needed to sign and return the opt-out consent form; none were returned. Participants were tested in a computer lab, with the researcher and their counselling teacher present. To balance the number of classes against the time of testing, half of the classes per grade were tested in the morning (8 a.m. to 9 a.m. or 9 a.m. to 10 a.m.) and the others in the afternoon (2 p.m. to 3 p.m. or 3 p.m. to 4 p.m.) (see Table 2). Participation lasted around 45 minutes. Before the study began, the research topic and aims were presented on each computer screen; participants had an opportunity to ask questions, and the researcher then asked whether anyone was unwilling to take part. None of the participants were blind to the aim of the study. Participants were then given the links to the experiments and questionnaires, which they entered into the browser to start the study. To use their time effectively, participants completed the two Emotional Stroop tasks first and then filled out the three questionnaires.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2247">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2248">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2249">
                <text>Chen2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2250">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2251">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2252">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2253">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2254">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2255">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2256">
                <text>Judith Lunn</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2257">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2258">
                <text>Developmental and Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2259">
                <text>352 Taiwanese adolescents </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2260">
                <text>MANOVA, ANCOVA, ANOVA, chi-square, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="96" public="1" featured="0">
    <fileContainer>
      <file fileId="56">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d4aa9430aa3e2bb18b1516d585ff40b0.pdf</src>
        <authentication>12831ea7f25dff685c81241732e4b679</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2196">
                <text>The Relationship Between Perspective-taking, Lie Detection and Self-construal Among Taiwanese</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2197">
                <text>Wen-Hsuan (Macy) Su</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2198">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2199">
                <text>“Theory of Mind” (ToM) refers to the ability to understand what other people may believe, think, know or feel, and is considered to play an essential role in social interaction. Evidence suggests that training which improves the ability to understand others’ mental states can also improve the ability to generate lies and to understand the kinds of situations in which people may lie. In addition, previous studies point out that individualistic and collectivist cultures differ in lie-telling and perspective-taking. The current study therefore aimed to investigate whether there is a relationship between perspective-taking, lie detection and self-construal (individualism and collectivism). Data were collected from 40 typically developed adults in Taiwan (M = 23.98, SD = 2.99). Each participant was asked to complete three computer-based tasks: a perspective-taking task, a lie detection task, and the Auckland Individualism and Collectivism Scale (AICS) questionnaire. The results showed no relationship between perspective-taking ability and lie detection. In addition, people who scored higher on individualism performed better at identifying truths but worse at detecting lies. This may relate to the “truth bias”, the tendency to assume that others are telling the truth rather than lying, which may be especially pronounced among individualists. However, because cultural effects such as language differences and self-construal might affect individuals’ performance on instances of ToM use, the current study suggests that people may need to use different cues to detect lies in truth-versus-lie judgments across cultures.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2200">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2201">
                <text>Participants&#13;
	Data were collected from 40 typically developing adults from an opportunity sample in Taiwan. The group comprised 20 males and 20 females between the ages of 18 and 30 (M = 23.98, SD = 2.99). All participants stated that they were Taiwanese and spoke Chinese/Taiwanese Mandarin as their first language, with normal or corrected-to-normal vision. None of the participants had been diagnosed with any neurological or developmental disorder. The data of participant No. 40 were excluded from analyses because the participant gave the same positive answer to every question in the lie detection task.&#13;
	The minimum required sample size, determined a priori using G*Power and assuming a medium effect size of 0.4 and power of 0.8, was 44. However, 40 was considered a more suitable sample size, as the experiment consisted of two orders for the perspective-taking task, four sub-sets of the lie detection task and a questionnaire. To counterbalance stimulus presentation, the target sample was set to 40, a multiple of eight (two orders by four sub-sets) and the closest such number to 44. All participants were given the consent form and the information sheet so that they understood the contents of the project before the tests began. Furthermore, this project was approved through the ethics process of the Department of Psychology at Lancaster University.&#13;
Design and Procedure &#13;
	Each participant was asked to complete three computer-based tasks: the perspective-taking task, the lie detection task, and the Auckland Individualism and Collectivism Scale (AICS) questionnaire. All of the tasks were translated or designed in the participants’ first language, Chinese/Taiwanese Mandarin, were presented on a laptop, and were responded to using the computer mouse. The full session took around an hour in total.&#13;
Perspective-taking Task&#13;
	The original perspective-taking task, known as the “director task”, can be traced back to studies by Keysar et al. (2000) and Apperly et al. (2010). The present study employed a version similar to that used by Wang et al. (2016). During the task instructions, participants were shown how to use a computer mouse to select and move objects. The experimenter then explained that the speaker/director stood behind the shelf and could not see the objects in the blocked slots, so the speaker/director would never ask participants to move an object in a blocked slot (see Figure 1). Participants were asked to take the speaker/director’s perspective and to respond as quickly and accurately as possible. &#13;
	Participants had a chance to practise (6 trials) and ask questions before the start of the task. The task was divided into four blocks, and participants were allowed to take breaks between blocks. There were a total of 128 trials: 16 experimental trials, 16 control trials, and filler trials for the remainder. The fillers served as a baseline measure for the non-perspective-taking aspects of the task, such as understanding and identifying the speaker/director’s instructions. In the 16 experimental trials, the speaker/director’s description differed from the participants’ point of view. The control trials provided a close match in visual and audio stimuli but imposed no demand on perspective-taking. For example, in the right-hand picture in Figure 1, if the speaker/director asked participants to move the “bigger” balloon, taking the speaker/director’s perspective meant moving the yellow balloon rather than the pink one, which was the bigger balloon from the participants’ own perspective. At the end of the task, only the number of egocentric errors committed on experimental trials was counted; these errors reflect a failure to take the director’s perspective into account. &#13;
&#13;
 &#13;
Figure 1. Left: An example of the control trials. Right: An example of the experimental trials.&#13;
&#13;
Lie Detection Task&#13;
	Participants were asked to watch 16 videos (each lasting around 15~45 seconds). The videos were recorded by four volunteer models from Lancaster University; all models were Taiwanese and spoke Chinese/Taiwanese Mandarin as their first language. Each model recorded 16 videos in total, comprising four stories. Each story was told in four versions: two truths and two lies, each told from either a first-person or a third-person perspective. For the story contents, there were several elements that storytellers were required to include in their stories (see Appendix A). In addition, the storytellers were given two designated elements to lie about in the lie stories. &#13;
	There was a total of 64 videos, which were divided evenly and pseudo-randomly into four lists; for example, within a single list participants never watched two videos of the same storyline containing lies and told from the same perspective by different storytellers. Each list therefore contained 16 unique videos. Participants watched the videos from one of the lists and, at the end of each video, were asked to judge whether the storyteller was telling a truth or a lie. To make sure that participants concentrated on the videos rather than simply guessing, they were also asked a question about an aspect of each video. These questions served as inclusion criteria: only judgements from videos whose question was answered correctly were included in the data analysis.&#13;
Auckland Individualism and Collectivism Scale (AICS)&#13;
	The third test used was the Auckland Individualism and Collectivism Scale (AICS), developed by Shulruf, Hattie and Dixon (2007) to measure individuals’ self-construal, namely individualism and collectivism. The questionnaire consists of 30 questions (see Appendix B and C), covering three dimensions of individualism and two dimensions of collectivism. The individualism scale consists of 12 items across three dimensions: responsibility, uniqueness and competitiveness. The collectivism scale consists of 8 items across two dimensions: advice and harmony. Each dimension was composed of four items. The questionnaire was presented in an online form, and participants completed it after they had finished the lie detection task. Each response was scored on a six-point Likert scale from 0 (Never) to 5 (Always), giving a maximum score of 60 on the individualism subscale and 40 on the collectivism subscale. A higher score on a subscale indicated that an individual was more inclined towards individualism or collectivism. &#13;
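A sketch of the subscale scoring in Python; the item-to-subscale assignment below is a placeholder, since the real assignment follows Shulruf, Hattie and Dixon (2007).&#13;
# responses: dict mapping item number to the 0 (Never) - 5 (Always) rating.
INDIVIDUALISM_ITEMS = range(1, 13)   # placeholder indices for the 12 individualism items
COLLECTIVISM_ITEMS = range(13, 21)   # placeholder indices for the 8 collectivism items

def aics_scores(responses):
    """Sum the Likert ratings into the two subscale totals (maxima 60 and 40)."""
    individualism = sum(responses[i] for i in INDIVIDUALISM_ITEMS)
    collectivism = sum(responses[i] for i in COLLECTIVISM_ITEMS)
    return individualism, collectivism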
	The AICS questionnaire has been shown to work in different cultures such as the United Kingdom, China, Romania and Italy (Bradford et al., 2018; Ewerlin, 2013). This means that this questionnaire can be used as a feasible measure in both individualistic and collectivistic cultures. Previous studies have mentioned that an individual can simultaneously show tendencies towards both individualism and collectivism; in other words, an individual may be able to achieve a high or low score on both subscales (Bradford et al., 2018; Shulruf et al., 2011). With this in mind, the analysis of the current study did not divide participants into two groups for individualism and collectivism. Instead, this study used the AICS questionnaire to obtain participants’ scores in individualism and collectivism, and to observe the relationship between individuals' self-construal and their ability to detect lies.&#13;
 &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2202">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2203">
                <text>Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2204">
                <text>Su2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2205">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2206">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2207">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2208">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2209">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2210">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2211">
                <text>Jessica Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2212">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2213">
                <text>Cognitive, developmental </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2214">
                <text>40 typically developing adults </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2215">
                <text>Regression, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
