<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=15&amp;sort_field=added" accessDate="2026-05-03T15:51:45+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>15</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="196" public="1" featured="0">
    <fileContainer>
      <file fileId="218">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f9177519dc5c68194a35cb5df1d2411d.doc</src>
        <authentication>ddb02b65d50864142451fb8a56e51c8a</authentication>
      </file>
      <file fileId="225">
        <src>https://www.johnntowse.com/LUSTRE/files/original/1e48795e7a9817c81dec944c610bf3b2.doc</src>
        <authentication>03e45136274151b42c745dcc2f9956e7</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3909">
                <text>Hemispheric Lateralisation of Facial Emotion Processing: A Possible Explanation of Atypical Empathetic Responses in Children with Autism Spectrum Disorder</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3910">
                <text>Lydia Brooks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3911">
                <text>07.09.2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3912">
                <text>Existing research suggests that children with autism exhibit a significant delay in the lateralisation of facial emotion processing (Taylor et al., 2012), and that this delay is associated with some of the social and emotional deficits that manifest within the disorder. The present study therefore aimed to ascertain the reliability of Taylor et al.’s (2012) findings by determining whether the strength of lateralisation for facial emotion processing differs between children with and without autism, while also determining whether this difference can explain atypical empathetic responses in children with autism. To explore these aims, an online version of the chimeric face task was administered to 11 neurotypical children and 5 children with a diagnosis of autism. The Child Empathy Quotient was completed by parents of all children, and The Autism Spectrum Quotient – Children’s Version was completed by parents of children with autism. Results indicated that there was no significant difference in the strength of hemispheric lateralisation for facial emotion processing between children with and without autism, and that the strength of lateralisation did not predict a child’s level of empathy, nor did a child’s autism severity. Instead, levels of empathy were best predicted by an individual’s diagnostic status and age. The present study was therefore unable to support the findings of Taylor et al. (2012) or explain empathy deficits in the autistic population. However, the limitations identified in this study help to inform future research on the relationship between the lateralisation of facial emotion processing and empathy.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3913">
                <text>Hemispheric Lateralisation, Emotion Processing, Autism Spectrum Disorder, Empathy, Chimeric Face Task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3914">
                <text>Participants &#13;
Participants were recruited from mainstream primary schools, wraparound care settings and specialist educational provisions in the Lancashire area, as well as via social media. A total of 22 parents completed the required questionnaires on behalf of their children, of whom 17 arranged a date and time for their child to complete the chimeric face task. One child, who is non-verbal and has received a diagnosis of ASD, had difficulty completing the task and selected responses impulsively without looking at, or taking sufficient time to consider, the facial stimuli and the emotions they depicted. For this reason, the chimeric face task was terminated prior to completion and the child’s data were not included in the analysis. &#13;
The final sample of participants consisted of 16 children aged between 5 and 10 years old, of whom 5 had received a formal diagnosis of ASD (5 boys; Mage = 6.8, SDage = 1.48). One child with ASD had a comorbid diagnosis of hypermobility and sensory processing disorder. All children with ASD were reported to speak English at home; one child was left-hand dominant, and four children were right-hand dominant. &#13;
The remaining participants were 11 typically developing children (6 girls, 5 boys; Mage = 7.0, SDage = 1.90), who had not been diagnosed with any neurodevelopmental disorders. One of these children was reported to speak Russian at home but is fluent in English. All children in the typically developing group were right-hand dominant.&#13;
Design &#13;
A two-factor between-subjects experimental design was employed to determine whether the strength of hemispheric lateralisation for facial emotion processing differs between children with and without a diagnosis of ASD. The independent variable for this research question was diagnostic status, a between-subject factor, with two groups: ASD and typically developing. Participants were assigned to one of these groups based on their diagnostic status, which was ascertained by their parent’s responses on the demographic questionnaire. The dependent variable for this research question was the strength of hemispheric lateralisation for facial emotion processing, which was measured using the chimeric face task. &#13;
A three-factor mixed-subjects predictive correlational design was employed to determine whether a child’s diagnostic status and strength of hemispheric lateralisation for facial emotion processing can predict their level of empathy. The predictor variables for this research question were diagnostic status, a between-subject factor (typically developing or ASD), and the strength of hemispheric lateralisation for facial emotion processing, a within-subject factor. The outcome variable for this design was empathy, a within-subject factor, measured by the Child Empathy Quotient. &#13;
Measures &#13;
Demographic Questionnaire &#13;
Materials. The online demographic questionnaire (see Appendix A) comprised eight questions. Three of these required parents of the participants to input a response; these questions were used to determine the child’s age (in years), month of birth, and year of birth. The remaining questions were multiple choice and therefore required parents to select an answer from 2-4 possible answer options. These questions acquired information including the child’s gender (male or female), dominant hand (left, right or don’t know/no preference), the language used in their home environment (English or other) and diagnostic status (formal diagnosis of ASD or no formal diagnosis of ASD). If the child did not speak English at home, then parents were required to input the language predominantly spoken. Parents who confirmed that their child had received a formal diagnosis of ASD were asked to input any comorbid diagnoses their child had received, so that they could be considered in the analysis. Parents who confirmed that their child had not received a diagnosis of ASD were asked if their child had received a diagnosis of any other neurodevelopmental disorders; this question was used for exclusionary purposes. &#13;
Procedure. Completion of the questionnaire took approximately 2 minutes. Following completion of the questionnaire, participants were excluded from the study and unable to proceed to the next stage if they did not meet the age criterion, or if they had not received a diagnosis of ASD but had received a diagnosis of another developmental disorder. &#13;
The Child Empathy Quotient (Auyeung et al., 2009)&#13;
Materials. The Child Empathy Quotient (EQ-Child) is a parent report questionnaire composed of 27 items (see Appendix B) used to measure a child’s level of empathy. This questionnaire was developed by Auyeung et al. (2009) as an adapted version of The Adult Empathy Quotient (Baron-Cohen &amp; Wheelwright, 2004); individual items have therefore been modified to be applicable and relevant to children. The items refer to behaviours, responses or difficulties commonly exhibited or experienced by children, e.g., ‘My child shows concern when others are upset’. Parents had to indicate the extent to which they agreed with each item by selecting one of the following options on a four-point Likert scale: ‘definitely agree’, ‘slightly agree’, ‘slightly disagree’, and ‘definitely disagree’.&#13;
The EQ-Child has previously been completed by parents of neurotypical children, and children with ASD, aged between 4 and 11 years old. The pilot study conducted by Auyeung et al. (2009) yielded findings indicative of high internal consistency and good test-retest reliability, and the patterns of results were consistent with those found in adult research (Baron-Cohen &amp; Wheelwright, 2004). &#13;
Procedure. All parents were required to complete the EQ-Child, which took approximately 5 minutes. The order in which the questionnaire items were presented remained consistent across parents, and parents were unable to proceed to the next part of the study before they had provided a response for all 27 items. &#13;
Scoring. Parental responses on individual questionnaire items were converted into numerical points and summed together to calculate an empathy score for each child. For the following numbered items: 1, 4, 8, 10, 13, 14, 15, 16, 19, 21, 22, 23, 24, and 25, a response of ‘definitely agree’ equalled 2, ‘slightly agree’ equalled 1, and ‘slightly disagree’ or ‘definitely disagree’ equalled 0. The remaining items were reverse coded. The maximum attainable empathy score was 54; the higher the score, the more empathetic a child is perceived to be by the adult completing the questionnaire. The scoring method applied in this study is consistent with the scoring method used, and detailed, in Auyeung et al. (2009). See Appendix B for the 27 items and their corresponding item numbers. &#13;
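As an illustration only (this is not analysis code from the project), the scoring rule above can be sketched in Python; the item numbers and point values are taken from the description, while the function name, input format and the exact reverse-coding rule are assumptions.&#13;
# Illustrative sketch of the EQ-Child scoring rule described above (names hypothetical).&#13;
POSITIVELY_SCORED_ITEMS = {1, 4, 8, 10, 13, 14, 15, 16, 19, 21, 22, 23, 24, 25}&#13;
def score_eq_child(responses):&#13;
    # responses: dict mapping item number (1-27) to one of the four Likert options&#13;
    forward = {'definitely agree': 2, 'slightly agree': 1,&#13;
               'slightly disagree': 0, 'definitely disagree': 0}&#13;
    # Remaining items are reverse coded (assumed mirror of the forward rule).&#13;
    reverse = {'definitely disagree': 2, 'slightly disagree': 1,&#13;
               'slightly agree': 0, 'definitely agree': 0}&#13;
    total = 0&#13;
    for item, answer in responses.items():&#13;
        key = forward if item in POSITIVELY_SCORED_ITEMS else reverse&#13;
        total += key[answer]&#13;
    return total  # maximum attainable score is 54&#13;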
The Autism Spectrum Quotient – Children’s Version (Auyeung et al., 2008)&#13;
Materials. The Autism Spectrum Quotient – Children’s Version (AQ-Child) developed by Auyeung et al. (2008) is a parent report questionnaire composed of 50 items (see Appendix C), used to quantitatively measure autistic traits in children aged between 4 and 11 years old. The items in the AQ-Child are derived from the Autism Spectrum Quotient – Adult’s Version (Baron-Cohen et al., 2001) and the Autism Spectrum Quotient – Adolescent’s Version (Baron-Cohen et al., 2006); however, they have been revised and adapted to be pertinent to children. The items therefore refer to scenarios and behaviours that children are likely to have experienced or exhibited, e.g., ‘S/he would rather go to a library than a birthday party’. Parents indicated how strongly they agreed with each descriptive statement by selecting one of the following responses on a four-point Likert scale: ‘definitely agree’, ‘slightly agree’, ‘slightly disagree’ and ‘definitely disagree’. &#13;
Previous studies have administered the AQ-Child to parents of children with ASD, aged between 5 and 11 years old (Auyeung et al., 2008). Administration of the AQ-Child has been reported to have excellent test-retest reliability and a high alpha (reliability) coefficient. &#13;
Procedure. This questionnaire was only completed by the parents of children with a diagnosis of ASD, all of whom were unable to proceed to the next stage of the study until they had provided an answer for all 50 items. This questionnaire took approximately 5-10 minutes to complete. The order of items remained constant between parents. &#13;
Scoring. For each child reported to have a diagnosis of ASD, a total AQ score was calculated. Total scores were calculated by converting responses on the four-point Likert scale into numerical scores and summing them together. For the following items: 1, 3, 8, 10, 11, 14, 15, 17, 24, 25, 27, 28, 29, 30, 31, 32, 34, 36, 37, 38, 40, 44, 47, 48, 49 and 50, a response of ‘definitely agree’ equalled 0, ‘slightly agree’ equalled 1, ‘slightly disagree’ equalled 2, and ‘definitely disagree’ equalled 3. The remaining items were reverse scored. The higher the overall score, the greater the number of autistic traits exhibited by the child. See Appendix C for the 50 items and their corresponding item numbers. &#13;
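For comparison, a similarly illustrative sketch of the AQ-Child scoring rule; note that agreement with the listed items scores low here, the opposite direction to the EQ-Child items above, and the reverse-scoring rule for the remaining items is again an assumption.&#13;
# Illustrative sketch of the AQ-Child scoring rule described above (names hypothetical).&#13;
LOW_ON_AGREE_ITEMS = {1, 3, 8, 10, 11, 14, 15, 17, 24, 25, 27, 28, 29, 30, 31, 32,&#13;
                      34, 36, 37, 38, 40, 44, 47, 48, 49, 50}&#13;
def score_aq_child(responses):&#13;
    forward = {'definitely agree': 0, 'slightly agree': 1,&#13;
               'slightly disagree': 2, 'definitely disagree': 3}&#13;
    # Remaining items are reverse scored (assumed mirror of the forward rule).&#13;
    reverse = {'definitely agree': 3, 'slightly agree': 2,&#13;
               'slightly disagree': 1, 'definitely disagree': 0}&#13;
    return sum((forward if item in LOW_ON_AGREE_ITEMS else reverse)[answer]&#13;
               for item, answer in responses.items())&#13;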
The Chimeric Face Task &#13;
Materials. The chimeric face task is a widely used measure of the lateralisation of facial emotion processing. Chimeric faces are composite visual stimuli that are made by splitting two symmetrically averaged images of a face vertically down the middle and combining them together to depict a different emotional expression in each hemiface. The chimeric faces and the symmetrically averaged images used in this study derive from the work of Michael Burt (Burt &amp; Perrett, 1997; Innes et al., 2016), and are supplied by Parker et al. (2021) via Gorilla Open Materials: https://gorilla.sc/openmaterials/104636.&#13;
In the practice trial, two symmetrically averaged images of male faces and two chimeric faces were used; these faces depicted the emotions fear and surprise. A further 12 chimeric faces were used in the experimental trial, which depicted all possible combinations of the emotions happiness, sadness, anger and disgust. Four symmetrically averaged images of male faces depicting these emotions were also used. See Figure 2 for the stimuli used in the experimental trial. &#13;
Figure 2. The Facial Stimuli used in the Experimental Trial of the Chimeric Face Task  &#13;
Note. The 16 facial stimuli presented to children during the experimental trial of the chimeric face task, including the four symmetrically averaged faces depicting the emotions happiness, sadness, anger and disgust, and the 12 possible combinations of these four symmetrically averaged faces. Adapted from “A leftward bias however you look at it: Revisiting the emotional chimeric face task as a tool for measuring emotional lateralisation” by B. R. Innes, D. M. Burt, Y. K. Birch, and M. Hausmann, 2016, Laterality: Asymmetries of Body, Brain and Cognition, 21(4-6), p. 649, supplied by “Assessing the reliability of an online behavioural laterality battery: A pre-registered study” by A. J. Parker, Z. V. Woodhead, P. A. Thompson, and D. V. Bishop, 2021, Laterality, 26.&#13;
The participants used emotional emoticons to indicate the emotion they believed to be depicted by the facial stimuli. In the practice trial, two emoticons were used, which illustrated the emotions fear and surprise. In the experimental trial, a further four emoticons were used, which illustrated the emotions happiness, sadness, disgust and anger. The emoticons used were taken from Oleszkiewicz et al. (2017), as it was found that children aged between 4- and 8-years-old were able to accurately assign emotions to these emoticons. See Figure 3 for the emoticons used in the experimental trial.&#13;
Figure 3. The Emoticon Stimuli used in the Experimental Trial of the Chimeric Face Task&#13;
Note. The emoticon stimuli selected by the child participants to indicate which emotion they believed the facial stimuli to be depicting. The emotions depicted by the emoticons, from left to right, are: anger, disgust, happiness and sadness. Adapted from “Children can accurately recognize facial emotions from emoticons” by A. Oleszkiewicz, T. Frackowiak, A. Sorokowska, and P. Sorokowski, 2017, Computers in Human Behavior, 76, p. 373.&#13;
Procedure. The procedure used derives from Parker et al. (2021); however, it has been adapted for use with children. The chimeric face task was administered remotely via Microsoft Teams, a video collaboration platform. The virtual meeting was only accessible by the participant and the researcher, via a unique uniform resource locator, meeting ID and passcode. The chimeric face task could be completed on a laptop, computer or electronic tablet, and participants were required to share their screen to allow for the delivery of verbal instructions. Children were accompanied by an adult family member who was asked to refrain from engaging in any verbal and non-verbal communication with their child during completion of the task.&#13;
Prior to administration of instructions, participants completed an estimation of screen size. This involved placing an 8.56cm X 5.39cm card onto the screen and dragging a bar until the size of the card on the screen corresponded with the physical card possessed by the participant. This was to ensure that all instructions and stimuli were presented at the same size to all participants. Instructions were administered visually to the child participants, using visual-graphic symbols, example screens and visual stimuli taken from the study, to ensure that the child’s understanding of the task was not limited by their language ability. The visual instructions were accompanied by verbal instructions that omitted vocabulary that would not typically be understood by children aged between 5 and 10 years old. Following administration of the instructions, participants were familiarised with two symmetrically averaged faces depicting the emotions fear and surprise, and their corresponding emoticons, which would be used in the practice trial. During completion of the practice trial participants were exposed to each symmetrically averaged face and each chimeric face twice, meaning they were exposed to a total of 8 stimuli. The practice trial was employed to acquaint the child with the procedure used in the experimental trial. &#13;
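As a minimal sketch of the idea behind this calibration (illustrative only; this is not the Gorilla implementation, and the names and the assumption that the adjusted card width is available in pixels are hypothetical):&#13;
# Illustrative only: converting the card calibration into a pixels-per-cm scale.&#13;
CARD_WIDTH_CM = 8.56   # physical card width stated in the text&#13;
CARD_HEIGHT_CM = 5.39  # physical card height stated in the text&#13;
def pixels_per_cm(adjusted_card_width_px):&#13;
    # Scale used so that instructions and stimuli appear the same physical size on every screen.&#13;
    return adjusted_card_width_px / CARD_WIDTH_CM&#13;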
Following the practice trial, participants were familiarised with the symmetrically averaged faces that comprised the chimeric faces used in the experimental trial, as well as their corresponding emoticons. These faces, and emoticons, depicted the emotions happiness, sadness, anger and disgust. Participants were verbally informed of the emotion depicted by each stimulus and were instructed to click ‘next’ or indicate to their parent when they felt they had familiarised themselves with the stimuli presented. Participants were familiarised with the stimuli to ensure they knew which emotion each face and emoticon represented. The experimental trial was composed of four blocks. In each block the participants were exposed to the four symmetrically averaged faces, and the 12 chimeric faces, twice, meaning they were exposed to 32 stimuli per block, and 128 stimuli in total. Participants were exposed to the symmetrically averaged faces to assess their recognition of the emotion, and to the chimeric faces to determine the strength of their hemispheric lateralisation for facial emotion processing. &#13;
Before being exposed to the stimuli, participants were asked to fixate on a white cross in the middle of the screen for 1000ms to ensure the child was looking directly at the facial stimulus when it appeared. This was important as the facial stimulus was only presented for 400ms. Following the presentation of each facial stimulus the participants had 10400ms to provide a response before automatically advancing to the next screen. All participants were instructed to “decide how the face is feeling and click on/point to/touch the emoji that shows that feeling”. The instruction provided differed depending on whether the child was responding using an electronic mouse, touch screen device, or by pointing and having their parent select the response for them. The latter was used for children who did not have access to a touch screen device, and who were not yet able to independently control an electronic mouse. &#13;
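Purely as an illustration of the trial timeline described above (not the actual task code), the phases and their durations can be laid out as follows:&#13;
# Sketch of a single trial timeline, with the durations (ms) given in the text.&#13;
TRIAL_TIMELINE = [&#13;
    ('fixation_cross', 1000),    # white central cross, ensures the child is looking at the screen&#13;
    ('face_stimulus', 400),      # symmetrically averaged or chimeric face&#13;
    ('response_window', 10400),  # emoticon choice; advances automatically if no response is made&#13;
]&#13;
max_trial_duration_ms = sum(duration for _, duration in TRIAL_TIMELINE)  # 11800 ms&#13;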
At three intervals during the chimeric face task, children were provided with the opportunity to take a break. During this break children received verbal praise and encouragement; the duration of the break was determined by the child. Administration of the chimeric face task took approximately 20-40 minutes. &#13;
Scoring. A laterality index was calculated for each child to determine their strength of lateralisation for facial emotion processing. The laterality index was calculated by counting the number of times the participant selected the emoticon corresponding with the emotion depicted on the right and left side of the face. The following sum was then computed for each participant: 100 X (No. of right hemiface responses – No. of left hemiface responses)/(No. of right hemiface responses + No. of left hemiface responses). &#13;
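Written out as a minimal Python sketch (the function name is hypothetical):&#13;
def laterality_index(right_hemiface_responses, left_hemiface_responses):&#13;
    # Positive values indicate responses driven more by the right hemiface, negative by the left.&#13;
    total = right_hemiface_responses + left_hemiface_responses&#13;
    return 100 * (right_hemiface_responses - left_hemiface_responses) / total&#13;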
Study Procedure &#13;
Ethical approval was obtained from the Lancaster University Department of Psychology Ethics Committee. Consent was received from all schools who agreed to distribute the study information to parents. Parental consent was obtained on behalf of all child participants, and oral consent was sought from the child participants during the virtual meeting. &#13;
The study comprised two parts, the first of which required parents to complete a series of questionnaires to provide a measure of the child’s demographic information, level of empathy, and autism severity. Parents were first presented with the demographic questionnaire to determine which additional questionnaires they were required to complete. If the parent’s responses on the demographic questionnaire indicated that their child had a diagnosis of ASD, then they were directed to, and required to complete, the EQ-Child and AQ-Child. If the parent’s responses denoted that their child did not have a diagnosis of ASD, they were only directed to, and required to complete, the EQ-Child. All questionnaires were completed on Gorilla (www.gorilla.sc), a cloud-based software platform for collecting data in the behavioural sciences (Anwyl-Irvine et al., 2020). All participants therefore completed the questionnaires remotely, on a personal electronic device. The questionnaires were compatible with a range of technological equipment, including laptops, computers, electronic tablets and mobile phones. &#13;
Following successful completion of the required questionnaires, parents received an email arranging a convenient date and time for their child to complete the second part of the study, which required their child to complete the chimeric face task. The data collected during completion of the chimeric face task were linked to the parental questionnaire responses via a unique participant ID code, which was allocated to parents following confirmation of participation. Following completion of the chimeric face task, a debrief sheet and certificate were sent to the parent’s email address.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3915">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3916">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3917">
                <text>Brooks2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3918">
                <text>Ching Yee Pang&#13;
Aleeza Sulaman&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3919">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3920">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3921">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3922">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3923">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3924">
                <text>Dr Margriet Groen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3925">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3926">
                <text>Cognitive, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3927">
                <text>The final sample of participants consisted of 16 children aged between 5 and 10 years old, of whom 5 had received a formal diagnosis of ASD (5 boys; Mage = 6.8, SDage = 1.48). The remaining participants were 11 typically developing children (6 girls, 5 boys; Mage = 7.0, SDage = 1.90), who had not been diagnosed with any neurodevelopmental disorders.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3928">
                <text>Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="197" public="1" featured="0">
    <fileContainer>
      <file fileId="219">
        <src>https://www.johnntowse.com/LUSTRE/files/original/625c88f2083a5146c896349cc929f8b5.csv</src>
        <authentication>43c905648f9c0c844dc9684798ccc8af</authentication>
      </file>
      <file fileId="220">
        <src>https://www.johnntowse.com/LUSTRE/files/original/30693a3ae2e634ecc0a3a0124be37b76.doc</src>
        <authentication>8c0cc15cbc1afa953ba348b369b7442b</authentication>
      </file>
    </fileContainer>
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3929">
                <text>Does Noise Affect How Children Learn Grammar in the Classroom?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3930">
                <text>Ashlynn Mayo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3931">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3932">
                <text>In a classroom environment noise can be a significant impediment, obstructing and distorting essential information being taught. Extensive prior research consistently indicates that noise has a detrimental impact on learning: those who learn in noise retain and comprehend far less information than their counterparts who learn in quiet. To date there are no studies that investigate the effect of noise on learning grammar specifically; the primary aim of the current study is to address this research gap. This paper details our recruitment of 16 children aged 7-12 through the Babylab database at Lancaster University. This study employed a between-participants design, where children completed a three-part audio evaluation, engaged in an artificial grammar paradigm, and undertook a working memory task. The artificial grammar paradigm was employed as our primary assessment tool; participants were exposed to the grammar either in noise or in quiet. Results were analysed using a multiple regression with total grammar score as the dependent variable and age, gender, condition, and working memory as the independent variables. In contrast to prior research, our results revealed that the effects of the independent variables on the dependent variable were statistically nonsignificant, consistent with our null hypotheses. These findings suggest that background noise does not affect how children learn grammar in the classroom, challenging the existing understanding that noise negatively impacts learning.&#13;
Analysis&#13;
In order to answer our research questions, we will carry out a multiple linear regression using IBM SPSS Statistics (version 28). We will be employing a between-participants design in which we will examine the effect of background noise (noisy and quiet) on total grammar score. Our additional independent variables will be working memory, gender and age. If we find a statistically significant result with regard to grammar score, then we will conduct a post hoc test on grammar scores, breaking them down into aX and Yb items, in order to determine the difference between the two types of grammar.&#13;
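As a hedged illustration of this analysis plan only (the study itself used SPSS; the Python/statsmodels sketch below assumes hypothetical column names in a per-child data file):&#13;
# Equivalent multiple linear regression sketch; not the SPSS syntax actually used.&#13;
import pandas as pd&#13;
import statsmodels.formula.api as smf&#13;
data = pd.read_csv('grammar_task_data.csv')  # hypothetical file, one row per child&#13;
model = smf.ols('total_grammar_score ~ age + gender + condition + working_memory',&#13;
                data=data).fit()&#13;
print(model.summary())&#13;
</text>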
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3933">
                <text>Grammar, Noise, Working Memory</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3934">
                <text>Participants&#13;
Sixteen children aged 7-12 years old participated in this study; unfortunately, due to technical issues, 5 participants’ data were excluded, leaving 11 children’s data to be included in the analysis (M=8.64, SD=1.63, female=7, male=4). Children were recruited through the Lancaster University Babylab database and by flyers posted on social media and in the local community. A requirement of the current study was that children be English-speaking monolinguals, because an abundance of research has indicated that those who can speak two or more languages are at a far greater advantage when it comes to new language acquisition (Antoniou et al., 2015). Therefore, in order to control for extraneous variables such as this, we ensured all participants were English-speaking monolinguals.&#13;
Furthermore, children were also required to have normal or corrected-to-normal vision. To rule out hearing loss, all children had to pass an otoscope inspection, a tympanometry test, and a pure tone hearing screening at 20dB in the standard frequencies (250Hz-8kHz).&#13;
The current study employed a between-participants design whereby subjects were allocated to a condition based on their age and gender; age was categorised into 7-9 and 10-12 in order to ensure that there were approximately equal numbers of males and females in each condition across all ages. It is crucial for the validity of the study that children are only exposed to the artificial grammar paradigm once, or the data will be rendered unreliable, as they would have an unfair advantage over the other participants.&#13;
Ethical approval for the current study was obtained from the Departmental Ethics Committee (DEC), Psychology Department at Lancaster University.&#13;
Materials&#13;
This study was conducted within a double walled soundproof chamber at Lancaster University’s PELiCAN lab where the participant sat at a desk with a monitor placed in front of them. A secondary researcher was present in the lab for health and safety purposes.&#13;
Consent and assent forms, a background questionnaire on the child’s hearing, audio evaluation results, and task data were all recorded on REDCap (Harris et al., 2009; Harris et al., 2019): a GDPR compliant application for data capture.&#13;
Travel compensation was provided: £5 within 40 minutes and £10 for over 40 minutes.&#13;
Furthermore, children received a certificate and book of their choosing from the PELiCAN lab.&#13;
The audio evaluation&#13;
This study comprised three sections, the first of which was an audio evaluation in which an otoscope examination, tympanometry test, and audiogram using Affinity Suite were conducted. During the audiogram participants wore headphones and had a handheld button that they pressed when they heard the pure tone sounds.&#13;
The Artificial Grammar Paradigm&#13;
After passing the hearing evaluation, the children completed an artificial grammar paradigm previously used by Torkildsen et al. (2013) consisting of two grammatical forms: aX and Yb. The paradigm was presented in the form of an alien game whereby the children helped an alien learn a new language. We presented the paradigm in this format in order to increase engagement; children are motivated by the colourful and curious nature of a game (Blumberg et al., 2019), and therefore we are far more likely to obtain more data (fewer dropouts due to fatigue and boredom). This task was created in PsychoPy and hosted on Pavlovia.&#13;
The background noise&#13;
In order to imitate the background noise of a classroom, speech-shaped noise (SSN; e.g. Leibold et al., 2013) was emitted through a speaker on the back wall of the booth, behind the child. The background noise speaker was at 180 degrees on the azimuth, and the target speaker at 0 degrees on the azimuth. The background stimulus was calibrated so that in the quiet condition it was emitted at 35dB and in the noisy condition it was played at 65dB.&#13;
The n-back Test of Working Memory&#13;
Lastly, we conducted the 1-back test of working memory (Owen et al., 2005), which was also created in PsychoPy and hosted on Pavlovia.&#13;
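A minimal sketch of how expected responses in such a 1-back task can be derived (illustrative only; the key mapping follows the Procedure section below, and the names are hypothetical):&#13;
# Illustrative 1-back scoring: 'x' marks a repeated sound, 'n' a new one.&#13;
def one_back_expected_keys(sound_sequence):&#13;
    # Expected key press for each sound after the first one in the sequence.&#13;
    expected = []&#13;
    for previous, current in zip(sound_sequence, sound_sequence[1:]):&#13;
        expected.append('x' if current == previous else 'n')&#13;
    return expected&#13;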
Procedure&#13;
Prior to the commencement of the study, guardians gave informed consent (see Appendix C); if the child was 11 or older, they gave informed assent in addition to this (see Appendix D). Guardians were then asked to complete a short background questionnaire pertaining to their child’s hearing (see Appendix H). Whilst they completed these forms, the researcher began the study inside the booth; using Affinity Suite, the researcher ensured that the microphone inside the booth was turned on so that the guardian could hear what was going on inside the booth through the headphones placed outside the booth. As aforementioned, the audio evaluation consisted of three tests; these were administered in the booth by the researcher and took up to 15 minutes. Firstly, an ear inspection was conducted using an otoscope; participants were required to have clear ears free of perforations and/or any infection. Secondly, a tympanometry test was conducted, which participants had to pass with type A (normal) results. Lastly, a pure tone hearing screening was conducted at 20dB in the standard frequencies (250Hz-8kHz). The researcher left the booth for the audiogram in order to run the program on the desktop outside the booth while the child remained inside the booth.&#13;
The task consisted of 11 blocks, each comprising 4 exposure items and 2 test items. Before the test portion, children were shown 4 examples of what was expected of them; they had to get these right in order for the software to move on to the test phase. If children did not get these right, the researcher explained and prompted them to pick the correct answer. Children were required to press ‘x’ on the keyboard for right and ‘n’ on the keyboard for wrong; answers were saved and recorded automatically on Pavlovia. The software was run by the researcher from outside the booth and was mirrored onto the desktop inside the booth.&#13;
Lastly, we conducted the 1-back test of working memory (Owen et al., 2005), where children were exposed to a number of animal sounds and were required to record whether the stimulus was a new sound or one they had heard before; ‘x’ represented a repeated sound and ‘n’ represented a new sound, and participants had to ensure they made a button press after each sound. Once all tasks were completed, the researcher collected the child from inside the booth and a short verbal and written debrief was given to the child and guardian. Guardians were given and signed for their travel compensation, and children received a certificate from the PELiCAN lab and were able to choose a book of their liking. Participants were walked back to their car or bus to bring the visit to a close.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3935">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3936">
                <text>Text/Word.doc&#13;
Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3937">
                <text>Mayo2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3938">
                <text>Tejasvita Rajawat&#13;
Audred Visaya</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3939">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3940">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3941">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3942">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3943">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3944">
                <text>Dr. Hannah Stewart</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3945">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3946">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3947">
                <text>11 (7 females, 4 males)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3948">
                <text>Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="198" public="1" featured="0">
    <fileContainer>
      <file fileId="232">
        <src>https://www.johnntowse.com/LUSTRE/files/original/04baf21e1843f00a20467503c8128264.doc</src>
        <authentication>913ba7c0598aa595ba198c32e4af7740</authentication>
      </file>
      <file fileId="233">
        <src>https://www.johnntowse.com/LUSTRE/files/original/7c17fd1c45a462a42c2a461e0b58286d.doc</src>
        <authentication>913ba7c0598aa595ba198c32e4af7740</authentication>
      </file>
    </fileContainer>
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3949">
                <text>Prospect theory and intermediate audience: the effects of context on behavioural intention</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3950">
                <text>Wai Man Ko </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3951">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3952">
                <text>Prospect theory predicts how people react to gain or loss-framed outcomes in dilemma situations, where the potential consequence of the choice is framed as a gain (e.g., lives saved) or as a loss (lives lost). This gain-loss framing communication strategy, derived from the theory, has been applied in many contexts, from promoting the use of reusable coffee mugs to vaccination compliance, with loss-framed appeals generally found to be more persuasive than gain-framed appeals in the context of promoting vaccination. The current study focused on exploring whether these well-established effects persist when an intermediate audience is exposed to gain/loss-framed messaging, using influenza (flu) vaccination intentionality as an outcome. Intermediate audiences refer to those who are evaluating the gains and losses from the message on behalf of someone else (the ultimate audience), while normal audiences are those making decisions on their own behalf. Two hundred participants were recruited for an online, between-subjects study, in which they were split into two audience conditions and, within each, further split to view a gain-framed or a loss-framed message. Their subsequent behavioural intentions were measured as the outcome, with age as a potential moderating factor (and emotional attachment as a potential mediator exclusively for the intermediate audience condition). Results indicate that neither age nor emotional attachment is a significant moderator or mediator. The loss-framed appeal enjoyed a persuasive advantage over the gain-framed appeal only in the intermediate audience condition. Possible interpretations of the results, along with potential further directions of research, are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3953">
                <text>Prospect theory, gain/loss framing, intermediate audience, communication research, health communication, vaccination</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3954">
                <text>To test the outlined hypotheses, the current study took the form of an online Qualtrics questionnaire (see appendix B for questions). The questionnaire introduced participants to one of the audience conditions; participants then viewed the appropriate version of the manipulated message before answering items measuring their behavioural intention and emotional attachment. The study had a 2 (intermediate/normal audience condition) x 2 (gain/loss-framed appeal) design, with emotional attachment as a potential mediating variable in the intermediate audience condition and behavioural intention as the outcome variable in all audience conditions.&#13;
Participants&#13;
We recruited 200 healthy adults based in the UK through Prolific, an online research participant recruitment platform. Participants provided consent and completed the study remotely on their personal devices. Their unique Prolific ID, which cannot be traced back to them personally, was the only identifier used in this study. Participants were compensated monetarily for their participation.&#13;
We randomly assigned participants to one of four conditions with 50 participants each: the normal gain-framed condition, the normal loss-framed condition, the intermediate gain-framed condition, and the intermediate loss-framed condition.&#13;
Questionnaire design&#13;
Consent&#13;
Participants gave consent via a Qualtrics consent element, checking a box for each of seven items, one by one, before commencing the study. Responses that did not complete every consent item were removed from the study.&#13;
Demographics&#13;
For demographics, we recorded the participants' age and gender. As mentioned, age was also analysed as a moderator in our analysis. We also recorded their Prolific IDs to confirm completion and arrange payment.&#13;
&#13;
Settings of the study&#13;
After providing demographic information, participants read a short passage that established the context of the study. In the normal audience conditions, participants were told that someone had sent them an advert about flu vaccination; this referred to the manipulated message they would view next. In the intermediate audience conditions, on top of the information given to the normal audience, participants were additionally told that they were a manager in a small town's paper company. This placed them in the role of an intermediate audience (the manager) who must evaluate the subsequently presented message on behalf of other parties (the employees), with the gains and losses remaining irrelevant to themselves.&#13;
Material&#13;
We chose flu vaccination as the topic for the manipulated messages because COVID vaccines, used in recent studies, are perhaps less relevant in what is generally thought of as the post-COVID era. Flu vaccinations, unlike many other vaccines, remain relevant to the general population and most age groups. To resemble real-world settings more closely and increase the generalisability of the results, we created unofficial Facebook posts presented as if from the NHS as the message format. Participants were informed that the graphics were not actual Facebook posts from the NHS but material created solely for this study. See Figure 2 for an example, and appendix A for the complete set of stimuli presented to participants in the study.&#13;
Audience condition. Figure 2 shows the gain-framed version of the message from the normal audience condition. In the normal audience conditions, the message addresses participants directly, stating the potential pros or cons for them of deciding to vaccinate or not to vaccinate. In this condition, it is assumed that participants evaluated the message on their own behalf and nobody else's. By contrast, the intermediate audience condition communicates a slightly different message: the "you" in the message is replaced by "your employees". The purpose of this is to highlight that participants evaluate the message as an intermediate audience (the manager), deciding whether to recommend the vaccine to somebody else (the ‘ultimate audience’) given the outlined potential gains and losses, while those gains and losses remain irrelevant to the participants personally.&#13;
Message framing. The figure shows a gain-framed message, which, as mentioned, follows the logical flow of "if you vaccinate, good things will happen". As Figure 2 shows, if the recipient vaccinates then, according to the text, he/she would have a reduced chance of infection and a reduction in the duration and severity of symptoms. The loss-framed version of the message follows the logical flow of "if you do not vaccinate, bad things will happen". In contrast to Figure 2, the loss-framed message states that if the recipient does not vaccinate, he/she would have an increased chance of infection and an increase in the duration and severity of symptoms. The two messages communicate the same reality and are logically equivalent; hence, any differences between the groups can be attributed to the message framing.&#13;
Check questions.&#13;
After viewing the message, participants were asked two questions regarding the ad's content before moving on to later questions. The check questions were designed as simple reading comprehension questions to check whether participants attended to the message while reading. We removed all responses that failed to answer either question correctly.&#13;
Behavioural intention&#13;
After viewing the framed messages, participants responded to several 7-point agree-disagree Likert items measuring their behavioural intention. Given the differences between audience conditions, and hence the potential differences in the decision-making process, behavioural intention is defined differently for the two types of audience. For the intermediate audience condition, behavioural intention is defined as "the intention to recommend/promote the behaviour to the ultimate audience (employees)", while for the normal audience conditions we measured participants' intention to get the vaccination themselves. Both audience conditions responded to six items probing their behavioural intentions. In the normal audience condition, participants were asked how likely they would be to get the flu jab, how urgent they thought it was, and whether they planned to get a flu jab after viewing the message; there were also reverse-worded items asking whether they thought getting a flu jab was NOT urgent. The intermediate audience was asked how likely they were to recommend the flu vaccine to their employees and how urgent and necessary they believed the vaccine was for their employees. (See the appendix for the complete set of questions.)&#13;
Emotional attachment&#13;
As mentioned, there is speculation about the involvement of relational dynamics and related emotions in the intermediate audience. We therefore included a set of questions probing participants' emotional attachment towards the employees, administered exclusively in the intermediate audience condition. There were four questions in total in this part of the study, focusing on the participants' sense of protection towards the employees: they asked to what extent participants thought the vaccine was necessary for the employees' own good and well-being, and to what extent participants were eager to protect them; a reverse-worded item was also included. (See the appendix for the complete set of questions.)&#13;
Method of analysis &#13;
We analysed the data using the clm() and clmm() functions from the ordinal package, run in RStudio with R version 4.1.1. We first tested the main effects of message framing and audience condition using clm(), and then analysed the magnitude of the random and interacting effects of age, question type and individual differences. Cumulative link models (CLMs) were chosen because they are designed explicitly for ordinal variables such as Likert scales: they predict the probability of each response level and, unlike metric models, avoid the type 1 and type 2 errors that can result from forcing ordinal variables into metric models (Liddell &amp; Kruschke, 2018). As for emotional attachment, given that each item probed a rather different emotion (e.g., sense of responsibility, sense of protection), we fitted a multivariate ordinal model using the mvord() function to see whether the multiple emotional outcomes differed significantly across audience conditions, after which we investigated whether any emotional attachment item was a significant predictor of behavioural intention using another clm model. We also fitted clm() models including the interaction term between age and condition predicting behavioural intention, to test whether age moderates the relationship between message framing and behavioural intention as proposed. Lastly, we fitted a cumulative link mixed model (clmm) to account for potential sources of random effects, such as participant and question differences, in the analyses.&#13;
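For orientation only, the sketch below illustrates the general shape of a cumulative (ordinal) logit model of behavioural intention on framing and audience condition. It uses Python's statsmodels rather than the authors' R/ordinal workflow, and the column names and data layout are hypothetical, not taken from the deposited files:&#13;
&#13;
import pandas as pd&#13;
from statsmodels.miscmodels.ordinal_model import OrderedModel&#13;
&#13;
# Hypothetical layout: one row per participant, 7-point intention rating plus condition labels.&#13;
df = pd.read_csv("Data.csv")&#13;
df["intention"] = pd.Categorical(df["intention"], categories=[1, 2, 3, 4, 5, 6, 7], ordered=True)&#13;
&#13;
# Dummy-code framing (gain/loss) and audience (normal/intermediate) and add their interaction.&#13;
exog = pd.get_dummies(df[["framing", "audience"]], drop_first=True).astype(float)&#13;
exog["framing_x_audience"] = exog.iloc[:, 0] * exog.iloc[:, 1]&#13;
&#13;
# Cumulative logit model (no explicit intercept; the response thresholds are estimated internally).&#13;
result = OrderedModel(df["intention"], exog, distr="logit").fit(method="bfgs")&#13;
print(result.summary())</text>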
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3955">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3956">
                <text>Data.csv </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3957">
                <text>Ko2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3958">
                <text>Hannah Clough</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3959">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3960">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3961">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3962">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3963">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3964">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3965">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3966">
                <text>200</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3967">
                <text>Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="199" public="1" featured="0">
    <fileContainer>
      <file fileId="226">
        <src>https://www.johnntowse.com/LUSTRE/files/original/96376421108de2636d9e981cf41048d7.pdf</src>
        <authentication>207c5b9b4b7a6c1b355951f5e4cfe9e3</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3968">
                <text>What person attributes influence the comprehension of written health information? A scoping review and critical appraisal </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3969">
                <text>Charlotte Betts </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3970">
                <text>11/09/2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3971">
                <text>Increasingly, individuals are required to be actively involved in their healthcare. To do so successfully, individuals need to possess the skills and resources to be able to access, understand, and apply health information. Health communication guidance proposes that health information is not understood due to the mismatch between adults' average literacy skills and the literacy skills required to comprehend health information. To tackle this, the use of plain language, such as shortening sentences and removing jargon, is promoted. Policies, however, do not commonly consider the impact of person attributes, such as age, education, and gender, on the comprehension of health information. To understand the nature and scope of current research, and whether person attributes do have an impact, a scoping review was conducted. The search strategy yielded 5,459 articles which were then screened, resulting in a final sample of 99 studies. Quantitative analyses and a critical appraisal revealed three main findings: (1) the research is heterogeneous and evolving; (2) person attributes are not commonly used in analyses; and (3) when person attributes are included, the effects on comprehension vary. The findings and implications of this review have the potential to influence how future research is conducted and, crucially, to inform policies about the importance of person attributes for the comprehension of health information.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3972">
                <text>health literacy, comprehension, person attributes, health outcomes.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3973">
                <text>Stage 1: Identify the Research Question&#13;
The current research is an updated scoping review, building upon earlier work by Davies et al. (in preparation), which seeks to answer: What person attributes affect or can be predicted to affect the response of individuals to written health information?&#13;
&#13;
Table 2&#13;
Search strategy methods (Strategy: Method)&#13;
Bibliographic: Searched the following: Cumulative Index to Nursing and Allied Health Literature (CINAHL); PsycINFO; PubMed; and Web of Science (WoS).&#13;
Journal: Obtained all sources from the following journals between 2018-11-08 and 2023-05-05: Patient Education and Counselling; Health Communication; and Journal of Health Communication.&#13;
Author: Obtained sources from the following authors between 2018-11-08 and 2023-05-05: TC Davis; Dan Morrow; Chiung-Ju Liu; Michael Paasche-Orlow; Lisa Soederberg Miller; Rima Rudd; and Michael Wolf.&#13;
Reference: Once the full-text screening of the bibliographic, journal, and author searches was complete, the reference lists of the included items were examined to locate new and possibly relevant articles.&#13;
&#13;
Stage 2: Identifying Relevant Studies&#13;
Sources were identified using four methods: (1) bibliographic search; (2) journal search; (3) author search; and (4) reference search. Grey literature was not searched due to concerns with the quality of the literature and possible time constraints. Excluding the reference search, all articles were published between 8th November 2018 to 5th May 2023. Details of each of the search methods are outlined in Table 2.&#13;
&#13;
Stage 3: Study Selection&#13;
Once articles were imported into Rayyan, a free online software application for conducting reviews (Ouzzani et al., 2016), duplicate articles were identified and removed. Articles then went through title and abstract screening, whereby articles which did not include the following were excluded: (1) a measure of understanding, comprehension, or readability; (2) a quantitative outcome; (3) typically developing populations; (4) presentation of health information; (5) original data (excluding reviews); and (6) presentation in English or to first-language speakers of English.&#13;
&#13;
The exclusion criteria (Table 3) enabled the final sample of studies to be focussed and relevant to the review. Included articles were then read in full and the same exclusion criteria were applied. Articles which passed the full-text screening were then examined to identify relevant studies from their reference lists, and these references then underwent the same screening process outlined above. Following best practice recommendations (Levac et al., 2010), study selection was conducted by myself and TM (an MSc student) to reduce the chance of bias. Further, regular training and meetings took place (with TM and supervisor RD) to become familiar with the process and to discuss and resolve conflicting decisions between researchers.&#13;
&#13;
Table 3&#13;
Exclusion criteria for study selection (Exclusion criterion: Reasoning)&#13;
Not a measure of understanding, comprehension, readability, metacomprehension, or recall: Articles which do not measure understanding, either directly or indirectly, and do not measure the readability of texts, are not relevant to the current review.&#13;
Not quantitative outcomes: Quantitative data are needed to understand average associations between the variation of person attributes and comprehension responses.&#13;
Not typical development (excluding participants presenting cognitive or language impairments): We first need to understand how responses to health information vary within a typical population; future research should be more inclusive to see how responses vary in the whole population.&#13;
No presentation of health information: The present review is concerned with comprehension of, and response to, written health information.&#13;
Not original data (rather than reviews): Although reviews themselves are not targets for review, they will be identified as potentially informative.&#13;
Not in English, or second-language speakers of English: There is limited information regarding how comprehension responses to text may differ or be similar across languages; further, text properties may differ.&#13;
&#13;
Stage 4: Data Charting&#13;
Articles were classified as either experimental, readability, or review articles and, as this paper focusses on research investigating the effects of person attributes, only experimental articles are analysed and reported here; TM analysed and reported the readability studies. Data extraction was completed so that information about the nature and characteristics of each study could be recorded. Data extraction was achieved by entering information (Table 4) into an online Qualtrics form developed and used by Davies et al. (in preparation) in their scoping review, which allowed for systematic extraction of information regarding the characteristics, methods, and findings of each study. To ensure that data extraction was reliable, a sample of studies was charted in parallel by myself and TM and checked by RD for consistency.&#13;
&#13;
Table 4&#13;
Characteristics that will be extracted from experimental studies for data charting.&#13;
&#13;
the article title&#13;
the article DOI, if available&#13;
the article authors&#13;
the article year of publication&#13;
the location of data collection (location may be inferred by author affiliation, or reported in article text concerning the regional or national source of health texts, or the locality of participant recruitment)&#13;
information about the composition of the participant sample (e.g., healthy adults, patients, etc.)&#13;
the number of participants&#13;
individual differences measures, if reported (e.g., gender, age, etc.)&#13;
text type (the type of the health information text sampled, e.g., website, medicine information, etc.)&#13;
text topic (the topic of the health texts sampled)&#13;
text sample size (the number of texts sampled)&#13;
if the study involved the manipulation of text properties, information on what linguistic or other features were manipulated, or what intervention was implemented (e.g., variation in organization or structure, in the inclusion of pictures, in readability, format, or other)&#13;
what test of comprehension was conducted (e.g., verbal or written summary, true/false question, open-ended questions, multiple-choice questions, cloze, recall, etc.)&#13;
what outcome measure was analysed (accuracy, or other)&#13;
&#13;
Stage 5: Collating, summarising and reporting the results&#13;
Data charting resulted in the creation of a database of detailed information about the nature and scope of each article. To make sense of this information effectively, the original database was organised using thematic labels (Table 5). For example, the thematic label leaflet would be applied to articles which referred to handouts of medical information as pamphlets, leaflets, or brochures. This process made it easier and clearer to conduct quantitative analyses and to provide a textual commentary on the findings. The quantitative analyses include frequencies and distributions of study characteristics observed in the sample, in addition to evidencing what direction of effect person attributes had on responses to health information. Reporting the directionality of results, as opposed to their significance, is deemed appropriate because the reporting of significance in reviews can be misleading (McKenzie &amp; Brennan, 2019). Following the synthesis and quantitative analyses, a critical appraisal of the evidence was conducted. Although this stage is optional for scoping reviews (Tricco et al., 2018; Levac et al., 2010), it was considered necessary for making sense of the conclusions we can reach given the synthesis of evidence. The appraisal followed the Synthesis Without Meta-analysis (SWiM) guidance (Campbell et al., 2020) and the Realist And Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) publication standards (Wong et al., 2013). Such guidance provides a framework for the critical appraisal to comprehensively answer the research question and to discuss the traditions, trends, and value of the research. Unlike other reviews, such as systematic reviews, formal assessment tools such as the Cochrane Risk of Bias tools will not be used, as this review does not focus on examining randomised controlled trials and the research is too heterogeneous to appropriately apply such tools (Levac et al., 2010).&#13;
&#13;
Table 5&#13;
Thematic labels for experimental studies&#13;
location&#13;
text type (e.g., consent form, decision aid)&#13;
topic or health area (e.g., arthritis, cancer)&#13;
intervention (e.g., counselling, drug)&#13;
[study] design (e.g., illustration type, text readability)&#13;
[study] implementation (e.g., different data visualizations, different organisation)&#13;
outcome (e.g., comprehension, knowledge)&#13;
[outcome] measure (e.g., multiple choice question, self-rated)&#13;
individual differences (e.g., age, gender)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3974">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3975">
                <text>Data.csv and Text.doc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3976">
                <text>Betts2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3977">
                <text>Oliver Powell</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3978">
                <text>Unsure. Contact Dr. Rob Davies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3979">
                <text>In part, in collaboration with TM. Supervised by Dr. Rob Davies&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3980">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3981">
                <text>Scoping Review</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3982">
                <text>LA1 4YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3983">
                <text>Dr. Rob Davies</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3984">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3985">
                <text>Scoping Review - Health Communication</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3986">
                <text>99 Studies</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3987">
                <text>Density plot and dot plot with critical appraisal</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="200" public="1" featured="0">
    <fileContainer>
      <file fileId="227">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e062f8b5eaffecab9990636ba589a6b1.pdf</src>
        <authentication>f34904e516c4c04821ec1e52402b3ea9</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3988">
                <text>Cerebral Lateralisation for Emotion Processing of Chimeric Faces in Individuals with Autism Spectrum Disorder </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3989">
                <text>Alexandra Crossley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3990">
                <text>5th September 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3991">
                <text>Many studies have suggested that emotion processing tasks, such as facial emotion recognition, are typically lateralised to the right hemisphere, with different emotions eliciting differing strengths of lateralisation (Bourne, 2010). However, there has been much debate as to the lateralisation of individuals with autism spectrum disorder (ASD) (Ashwin et al., 2005; Shamay-Tsoory et al., 2010). This study assessed the cerebral lateralisation of 30 adults with ASD, five children with ASD, 435 neurotypical adults and ten neurotypical children in a chimeric faces task, and aimed to identify whether the atypical lateralisation seen in children with ASD persists into adulthood (Taylor et al., 2012). Furthermore, the study aimed to identify whether lateralisation strength is affected by the emotion of the facial stimuli. No emotion- or age-related change in lateralisation was found; however, participants with ASD demonstrated weaker right-hemispheric lateralisation compared to neurotypical participants. This study therefore supported the idea that individuals with ASD show atypical lateralisation which persists into adulthood; however, no evidence was found to support the idea that different emotions elicit different strengths of lateralisation.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3992">
                <text>autism spectrum disorder, cerebral lateralisation, emotion processing, adults, children, chimeric faces task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3993">
                <text>Method&#13;
Participants&#13;
Data from a total of 481 participants with native level English proficiency (or age expected language development in children), normal or corrected-to-normal vision and no history of neurological disease or hearing loss were analysed for the current study (Table 1). Participants in the group ‘adults with ASD’ (N = 30; age: M = 30.17, SD = 9.85) were recruited through adverts on social media, through Prolific Academic (www.prolific.co), and through word of mouth. Participants in the groups ‘children with ASD’ (N = 5; age: M = 6.8, SD = 1.48) and ‘neurotypical children’ (N = 11; age: M = 7.0, SD = 1.90) were recruited through primary schools and word of mouth (Brooks, 2023), and parents of potential child participants were required to email a researcher to express their interest in participation. Participants in the group ‘neurotypical adults’ (N = 435; age: M = 29.44, SD = 8.03) were recruited through Prolific Academic (www.prolific.co) as part of a larger online behavioural laterality battery (Parker et al., 2021). Of the 481 participants who took part in the study, 32 were excluded during the data cleaning process (see Table 1 and Data Analysis for further information).&#13;
&#13;
Measures&#13;
As part of the study, a series of questionnaires was administered to collect information about the participants so that individual differences could be accounted for. Participants were asked to complete the study's associated questionnaires and tasks prior to beginning the main chimeric faces task, and were requested to use a desktop or laptop computer for the entirety of the study. For the ‘neurotypical children’ and ‘children with ASD’ groups, parents were asked to complete the questionnaires on behalf of the children and to be present for the tasks, which were completed during a Microsoft Teams call with a researcher.&#13;
The study was completed online using the Gorilla Experiment Builder (www.gorilla.sc), a cloud-based tool for collecting data in the behavioural sciences.&#13;
&#13;
Demographic Questionnaire&#13;
The demographic questionnaire asked participants their age, gender, length of time in education (in years), language status, two questions assessing handedness (“Which is your dominant hand? / Which hand do you prefer to use for tasks such as writing, cutting, and catching a ball?”) and footedness (“Which foot do you normally use to step up on a ladder/step?”), and two eye dominance tests (Miles, 1929; Porac &amp; Coren, 1976). Participants were also asked whether they had a diagnosis of any developmental disorders, including ASD, dyslexia, attention deficit hyperactivity disorder or a language disorder (such as 'developmental language disorder' or 'specific language impairment'). For each diagnosis, participants had the option to answer “Yes”, “No”, or “Prefer not to say”, with the exception of ASD which also had the option to answer “No but I am self-diagnosed”. At this point, participants were sorted into their groups based on age (‘children’: five- to 11-years-old; or ‘adults’: 18- to 50-years-old) and ASD diagnosis (‘with ASD’, or ‘neurotypical’). Adults with a self-diagnosis of ASD were included in the ‘adults with ASD’ group.&#13;
&#13;
Edinburgh Handedness Inventory&#13;
The Edinburgh Handedness Inventory (EHI; Oldfield, 1971) was administered to provide a scaled score of handedness. Adult participants were asked to score ten daily tasks on a five-point Likert scale based on which hand they preferred to use during each task (“Left hand strongly preferred” = 2, “Left hand preferred” = 1, “No preference” = 0, “Right hand preferred” = 1, or “Right hand strongly preferred” = 2). These tasks included daily activities such as writing, brushing teeth, and opening a box. The EHI was scored by combining the direction and exclusiveness of the hand preference. Two totals were created: one of right-hand preference and one of left-hand preference. The difference was then found by subtracting the left-hand total from the right-hand total. This was then divided by the total score of both hand preference scores and multiplied by 100 (i.e., 100 x (right-hand total – left-hand total) / (right-hand total + left-hand total)). Final EHI scores ranged from -100 to +100, with positive scores indicating right-handedness, and negative scores indicating left-handedness. Child participants were not required to complete the EHI questionnaire.&#13;
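A minimal worked sketch of the scoring rule above (illustrative only, not the study's scoring script; the totals are hypothetical):&#13;
&#13;
def ehi_quotient(right_total, left_total):&#13;
    # Laterality quotient combining direction and exclusiveness of hand preference:&#13;
    # 100 x (right - left) / (right + left), ranging from -100 (left-handed) to +100 (right-handed).&#13;
    return 100 * (right_total - left_total) / (right_total + left_total)&#13;
&#13;
ehi_quotient(right_total=17, left_total=3)  # hypothetical totals -> 70.0&#13;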
&#13;
Lexical Test for Advanced Learners of English&#13;
A version of the Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer &amp; Broersma, 2012) was provided to assess the participants’ level of proficiency in English. Within this, adult participants were shown 60 written stimuli comprised of English words and pseudowords (words that follow the orthographical and phonetic rules of the English language and are pronounceable but are otherwise nonsense words, e.g. ‘proom’) and asked to assess whether each word was an existing English word or not. Scores of the test were collected by averaging the percentages of correct answers for English words and pseudowords, with final scores ranging from 0-100. Child participants were not required to complete the LexTALE task.&#13;
&#13;
Autism-Spectrum Quotient (Short Version)&#13;
An abridged version of the Autism-Spectrum Quotient (AQ-Short; Hoekstra et al., 2011) was used to provide a measure of ASD traits. Participants with ASD were asked to rate 28 statements on a four-point Likert scale based on their level of agreement, with each answer accruing a different number of points (“Definitely agree” = 1, “Slightly agree” = 2, “Slightly disagree” = 3, or “Definitely disagree” = 4). On items in which “Definitely agree” represented a characteristic of ASD, the scoring was reversed. The scores for each question were totalled, with potential scores ranging from 28 (no ASD traits) to 112 (endorsement of all ASD traits). Scores above 65 indicated ASD traits to a diagnosable degree. Neurotypical participants were not required to complete the AQ-Short questionnaire.&#13;
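A minimal sketch of this scoring rule (illustrative only, not the study's scoring script; the responses and the set of reverse-scored items are hypothetical):&#13;
&#13;
def aq_short_total(responses, reverse_items):&#13;
    # responses: item number -> points (1 = "Definitely agree" ... 4 = "Definitely disagree")&#13;
    # reverse_items: items on which "Definitely agree" represents an ASD characteristic&#13;
    total = 0&#13;
    for item, points in responses.items():&#13;
        total += (5 - points) if item in reverse_items else points&#13;
    return total  # ranges 28 to 112; above 65 suggests traits to a diagnosable degree&#13;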
&#13;
Procedure&#13;
Lateralisation for Facial Emotion Processing Task&#13;
A chimeric faces task was used to assess lateralisation for facial emotion processing.&#13;
Stimuli. The chimeric faces stimuli were created by Dr Michael Burt (Burt &amp; Perrett, 1997) and provided by Parker et al. (2021).&#13;
A collection of 16 different facial stimuli were created by merging two photographs of a man’s face depicting one of four emotions (‘happiness’, ‘sadness’, ‘anger’, or ‘disgust’) vertically down the centre of the face and blended at the midline (see Figure 1 for an example). Each emotion was paired either with itself, causing both hemifaces of the facial stimuli to match in emotion (a ‘same face’), or with a differing emotion, causing both hemifaces of the facial stimuli to be different (a ‘chimeric face’). Of the 16 stimuli, 12 were ‘chimeric face’ and four were ‘same face’.&#13;
Task. Each trial began with a fixation cross shown for 1000ms, followed by the face stimuli for 400ms. Participants then recorded which emotion they saw most strongly by clicking the corresponding button from a choice of the four emotions (Figure 2). For the children, emoticons were used instead of written words (Oleszkiewicz et al., 2017) (Figure 3). A response triggered the beginning of the next trial, with a time-out duration set at 10400ms after which the next trial was triggered automatically. Response choice and response times were recorded.&#13;
The task was split into four blocks of trials with a break between each block. Stimuli were presented in a random order and shown twice in each block, resulting in the participants being shown 32 stimuli per block and a total of 128 within the whole task.&#13;
&#13;
&#13;
Participants were familiarised with the stimuli at the start of the task, with the ‘same face’ stimuli being shown alongside a label explaining which emotion was being presented, to ensure they could recognise the emotions. A practice block was given at the start of the task to ensure participants knew how to complete the task, using the emotions ‘surprise’ and ‘fear’.&#13;
&#13;
Additional Measures&#13;
As data collection also included tasks for other studies, participants were additionally asked to complete a version of the Empathy Quotient – Short (Wakabayashi et al., 2006) and to undertake a dichotic listening task and its associated device checks (Parker et al., 2021). As these items were not part of the main study, participants completed them after the main study and its associated questionnaires and tasks, to ensure that any findings from the main study were not influenced by the additional measures.&#13;
&#13;
Laterality Index&#13;
A laterality index (LI) for each participant was calculated using the same method as Parker et al. (2021) by finding the difference between the number of times the participant chose the right-hemiface emotion and the left-hemiface emotion. This was then divided by the total number of times they chose either the right- or left-hemiface emotion, and multiplied by 100 (i.e., 100 x (right hemiface – left hemiface) / (right hemiface + left hemiface)). Scores ranged between -100 and +100, with a negative LI indicating a left-hemiface bias, and thus, a right-hemispheric dominance, and a positive LI showing the opposite.&#13;
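The same ratio, sketched for the hemiface choice counts (illustrative only; the counts are hypothetical):&#13;
&#13;
def laterality_index(right_hemiface, left_hemiface):&#13;
    # 100 x (right - left) / (right + left); negative values indicate a left-hemiface bias&#13;
    # and hence right-hemispheric dominance, positive values the opposite.&#13;
    return 100 * (right_hemiface - left_hemiface) / (right_hemiface + left_hemiface)&#13;
&#13;
laterality_index(right_hemiface=40, left_hemiface=56)  # hypothetical counts -> about -16.7&#13;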
&#13;
Data Analysis&#13;
Participants who scored less than 80 on the LexTALE task were removed, as their understanding of the English language was deemed not strong enough and might have caused issues with understanding the instructions (Parker et al., 2021). Furthermore, all trials with a response time faster than 200ms were removed, as responses at this speed were considered too quick to have been based on processing of the stimuli (Parker et al., 2021). In addition, outlier response times for each participant were removed using Hoaglin &amp; Iglewicz's (1987) procedure: outliers were any response times more than 1.65 times the difference between the first and third quartiles below the first quartile or above the third (i.e., below Q1 – (1.65 x (Q3 – Q1)) or above Q3 + (1.65 x (Q3 – Q1))). Following the removal of all outlying trials, any participant with less than 80% of trials remaining was removed. Participants who scored less than 75% on ‘same face’ trials (trials in which both hemifaces depicted the same emotion) were also noted, because emotion processing is an area of difficulty for individuals with ASD. Three participants in the ‘children with ASD’ group (60%), three participants in the ‘neurotypical children’ group (27.27%), four participants in the ‘adults with ASD’ group (13.33%), and 30 participants in the ‘neurotypical adults’ group (7.41%) scored less than 75% on ‘same face’ trials, suggesting they had difficulties identifying the emotions.&#13;
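An illustrative sketch of these response-time fences (not the study's analysis script; the quartile values and response times are hypothetical):&#13;
&#13;
def hoaglin_iglewicz_fences(q1, q3, k=1.65):&#13;
    # Fences k times the interquartile distance below Q1 and above Q3.&#13;
    spread = q3 - q1&#13;
    return q1 - k * spread, q3 + k * spread&#13;
&#13;
low, high = hoaglin_iglewicz_fences(q1=550.0, q3=900.0)  # hypothetical quartiles in ms&#13;
rts = [240.0, 640.0, 720.0, 2050.0]&#13;
kept = [rt for rt in rts if rt >= low and high >= rt]  # 2050 ms exceeds the upper fence and is dropped&#13;
&#13;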
To address the hypotheses, a linear model was fitted with LI as the outcome and group (‘ASD’ or ‘neurotypical’), age (‘adult’ or ‘child’) and emotion (‘happy’ and ‘angry’, or ‘sad’ and ‘disgust’) as the predictors, including the interactions between the predictors (Group x Age; Group x Emotion; Age x Emotion; and the three-way interaction, Group x Age x Emotion).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3994">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3995">
                <text>.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3996">
                <text>Crossley2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3997">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3998">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3999">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4000">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="4001">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4027">
                <text>Mshary Al Jaber</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="4002">
                <text>Margriet Groen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="4003">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="4004">
                <text>Developmental, Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="4005">
                <text>481 participants with native level English proficiency, 164 Male, 240 female and 1 other.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="4006">
                <text>Linear Mixed Effects Modelling and T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="201" public="1" featured="0">
    <fileContainer>
      <file fileId="228">
        <src>https://www.johnntowse.com/LUSTRE/files/original/ea2c0c6f1d9da3c754aeca4f45c6e344.pdf</src>
        <authentication>bcd96b51fb4c89cefd082eb9845b288a</authentication>
      </file>
    </fileContainer>
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4007">
                <text>Investigating infant expectation on object search tasks.  </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4008">
                <text>Leah Murphy</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4009">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4010">
                <text>The current study aims to distinguish between Piaget’s (1954) theory of object understanding, which highlights the role of object permanence in A not B task performance, and Diamond’s (1985) theory, which highlights the role of motor demands and the inability to inhibit habitual behaviours during the task. These two theories differ in their predictions for the expectations of the infants taking part: Piaget (1954) predicts that infants’ lack of object permanence causes poor performance on the task, whereas Diamond (1985) predicts that infants understand the movement of objects and that a lack of inhibition of habitual behaviours causes errors in performance. We tested 15 nine-month-old infants on a looking version of the A not B task. Impossible and possible outcomes were also incorporated on B trials, with the object being revealed from either the correct or incorrect location (e.g., see Ahmed &amp; Ruffman, 1998). Infant first look direction, accumulated looking time during trials and the number of social looks initiated post-outcome were used as measures. We found significant evidence of the ‘A not B’ error during trials, with a significantly increased number of incorrect first looks on B trials. There was also a descriptive pattern suggesting surprise at object location reveals, with an increased number of social looks during B compared to A trials, though this was not significant. Accumulated looking analysis showed that infants looked longer on A than B trials, suggesting that infants expected the object to be in location B on B trials, demonstrating infants’ ability to understand objects and supporting Diamond’s (1985) theory. However, the implications of the small sample size and the presence of individual differences for the interpretation of looking time data are discussed. Theoretical implications and future research are suggested; overall, results provide support for the application of Piaget’s (1954) theory and suggest that infants have limited object understanding based on their displayed expectations during testing.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4011">
                <text>Infant, behaviours, theory</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="4012">
                <text>3.1. Participants &#13;
In this study, 15 participants took part, aged 8 months and 12 days to 9 months and 27 days old (M = 9 months and 3 days, SD = 11.3 days). Six further infants were excluded from data analysis because they became too fussy to complete the study. Participants were recruited from the Lancaster Babylab database and the Lancaster Babylab Facebook page, and also via word of mouth from guardians taking part in the study. &#13;
3.2. Materials &#13;
The video stimuli were created using Canva software (Canva.com, 2023) and uploaded onto ‘Habit 2’ software (see Oakes et al., 2019) to display the stimuli during testing and to measure the accumulated looking time of the infant participants. The stimuli involved a novel object obtained from the NOUN database (Horst &amp; Hout, 2016). A camera was used to record the social looks exchanged between the infant and guardian, as well as the direction of the infants’ first looks during testing. &#13;
3.3. Design &#13;
This study had a within-subjects design, with all participants exposed to the same experimental conditions and the same stimuli. To counterbalance for location effects, half of the participants witnessed the object being hidden in the box on the left during A trials, whilst the other half witnessed the object being hidden in the box on the right during A trials. The presentation of the accurate and inaccurate B trials was further counterbalanced across participants: half of the participants viewed the inaccurate B trials first, and the other half viewed the accurate B trials first. &#13;
3.4. Ethical approval &#13;
Ethical approval for this study was granted by the departmental ethics committee (DEC) at Lancaster University. Guardians were recruited via their preferred contact method and were sent the participant information sheet to read before agreeing to take part in the study. A date and time of testing at the Babylab building at Lancaster University was arranged via telephone or email. Upon arrival, guardians were presented with the consent form to sign and initial all points before being allowed to take part. They were also given the opportunity to ask any questions about the study and were informed that they could withdraw at any time. After the study, the guardian received a five-pound contribution to travel costs, along with a free children’s book for the infant, as a reward for taking part in the study. The guardian also received a debrief sheet to read and to take home, providing them with all contact information for the lead researcher if they wished to ask any questions or to withdraw from the study. &#13;
3.5. Procedure &#13;
The testing took place in a private room within the Whewell building at Lancaster University. The infant and guardian sat in front of a computer screen, with the infant in a highchair positioned directly in front of the screen and the guardian in a chair to the side, slightly behind the infant (to allow researchers to see clearly when the infant initiated a social look). The experimenter sat behind a divider at a computer, out of sight of the infant and guardian. A social engagement video of the experimenter saying, “Let’s hide the blap, can you find the blap?” was presented to the infants at the start of the experiment and between each trial, to insert social communication and guide the attention of the infant to the screen before the stimuli were presented. The infant then watched a series of video stimuli in which a novel object appeared on the screen and moved into one of two boxes; both boxes were then covered (the object was hidden), and there was a delay period of five seconds (see Figure 1). After the delay period, both boxes were revealed, and the location of the toy was visible to the infant. Any movement of the object was accompanied by a sound to guide the attention of the infant to the object, but this sound was not present when the object was revealed, to avoid any leading factors when measuring infant expectation. Instead, the occluders made a simple “whoosh” sound when they were removed, to ensure the infant was paying attention. After five identical A trials, the object was hidden in the second location and the process was repeated for six B trials. During the B trials, the object was hidden in the second location but was revealed to be in either the correct (accurate) or incorrect (inaccurate) location (see Figure 2). This variation in outcome was presented alternately to the infant, with the object being revealed from the incorrect location for three of the six B trials. The study lasted for approximately 10 minutes per participant. &#13;
Figure 1 &#13;
Example of A not B task stimuli presentation during A trials or accurate B trials. &#13;
Figure 2 &#13;
Example of A not B task stimuli presentation during inaccurate B trials. &#13;
3.6. Behavioural coding &#13;
Infant looking time was coded online, as trial lengths were infant controlled. Each trial ended when the infant looked away for four seconds. Because this controlled the trial length, looking time was not double coded, as this would inherently lead to a high agreement level. For the coding of infant first look and number of social looks, the videos recorded of the participants were saved and uploaded onto Microsoft OneDrive to be coded offline. First look was defined as the direction that the infant first looked towards once the occluder was removed and the object was revealed. On trials where the infant was not looking as the occluder was removed, the first look was defined as the direction in which they looked once their gaze returned to the screen. The first look direction was coded as correct or incorrect. The number of social looks initiated by the infant per trial was also measured during coding, defined as the infant turning towards the guardian during each trial after an outcome was revealed. Twenty percent of the videos were dual coded and there were no discrepancies between researchers during the dual coding process for first looks (r = 1, p &lt; 0.01) or social looking (r = 1, p &lt; 0.01).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="4013">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4014">
                <text>Text/Word.doc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="4015">
                <text>Murphy2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4016">
                <text>Alicja Kowalska</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4017">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="4018">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4019">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4020">
                <text>Text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="4021">
                <text>LA1 4YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="4022">
                <text>Kirsty Dunn</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="4023">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="4024">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="4025">
                <text>15 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="4026">
                <text>Correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="202" public="1" featured="0">
    <fileContainer>
      <file fileId="229">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2fc9eb768f4d00d92b5e73627b2912cf.docx</src>
        <authentication>2c3e89d0f82f7c2b4dd77fac20aa220e</authentication>
      </file>
      <file fileId="230">
        <src>https://www.johnntowse.com/LUSTRE/files/original/199d736584372c0beff6cff855b5aae8.xlsx</src>
        <authentication>0fba9d41dead25b6239c2151286388d8</authentication>
      </file>
      <file fileId="231">
        <src>https://www.johnntowse.com/LUSTRE/files/original/aa42a4e75948741e54f7972ce17998eb.xlsx</src>
        <authentication>75c7c9e87ba5477883053a77a5350982</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4028">
                <text>Is selfie-related behaviour motivated by sexual orientation and gender conformity?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4029">
                <text>Wen Li</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4030">
                <text>2022-2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4031">
                <text>In the digital age, selfie culture has become an integral part of social media platforms. This globally widespread phenomenon has created a distinctive form of self-expression, allowing selfie-makers to convey their identities, shape online personas, and build connections with others. Selfies are more than the photos presented; they also involve a series of backstage behaviours that lead to their creation and sharing. Within selfie research, gender differences in selfie-related behaviours have enabled further comprehension of selfies in terms of self-expression. Social Role Theory (SRT) explains gender differences in selfie culture in terms of gender role norms and social expectations shaping individuals' identities and behaviours. This study explored the concept of gender conformity among heterosexuals and non-heterosexuals and its impact on selfie-related behaviours. A total of 120 participants, categorised into heterosexual men, heterosexual women, non-heterosexual men, and non-heterosexual women, engaged in an online questionnaire and contributed a total of 150 selfies. Data analysis involved one-way analysis of variance (ANOVA) to test for differences between the four groups, and multiple regression analysis to assess the influence of gender, the Traditional Masculinity-Femininity (TMF) scale, and the sexual attraction to men score. The results revealed no differences across the four groups in the nine domains of selfie motives, or in preoccupation; however, retention of moments and entertainment emerged as the most prominent motives for selfies. For selfie behaviours, time spent on taking, editing, and selecting selfies, as well as the number of selfies taken and editing frequency, differed significantly among the four groups. Specifically, both heterosexual women and non-heterosexual women tended to allocate more time to taking, editing, and selecting selfies for posting, while heterosexual women and non-heterosexual men showed a higher tendency to take a greater number of selfies and to edit selfies more frequently. These findings support existing studies indicating that women engage in selfie-related behaviours more actively than men, and further suggest that sexual orientation, especially sexual attraction to men, also encourages some selfie-related behaviours. The results provide evidence for SRT, in that gender roles shape behaviours through socialisation, but also draw criticism, as the TMF scale failed to predict impacts on selfie-related behaviours and sexual orientation can break traditional gender role expectations. Future research should continue exploring these relationships, offering deeper insights into gender conformity and gender non-conformity in the realm of self-presentation across diverse identity roles, thereby contributing to a more inclusive and diverse self-image narrative.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4032">
                <text>Selfie, Self-expression, Gender conformity, Sexual orientation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="4033">
                <text>Participants&#13;
This study recruited 120 adult participants voluntarily and anonymously through a Qualtrics online questionnaire, of whom 67 completed the questionnaire, including 22 men, 42 women and 3 who self-identified as non-binary. However, since only binary gender was considered for the analysis of gender conformity, the three non-binary responses were removed. Additionally, in terms of sexual orientation, there were 48 heterosexuals, 6 homosexuals, 8 bisexuals and 2 others. The participants were divided into four sample groups: 19 heterosexual men, 29 heterosexual women, 3 non-heterosexual men and 13 non-heterosexual women.&#13;
Materials&#13;
Sexual orientation and gender conformity were the two predictor domains, and selfie-related behaviours were regarded as the outcomes. Sexual orientation was measured by self-rated sexual attraction to men and women, while gender conformity was measured by the self-ascribed Traditional Masculinity/Femininity (TMF) scale. Selfie-related behaviour involved several aspects of selfie taking, editing, and posting. In addition, selfie motives and attributes of uploaded selfies were also taken into account.&#13;
Sexual attraction score&#13;
Sexual orientation was self-identified by participants as heterosexual, homosexual, bisexual, or other. Participants were then asked to rate two statements about their sexual attraction to men and women which, with some adjustment to Lippa’s (2002) methodology, were: "I am sexually attracted to men" and "I am sexually attracted to women". In both cases, participants rated themselves on a scale of 0-10, where a higher score indicated greater sexual attraction to men or women. This measure transformed sexual orientation from a categorical variable to a continuous variable for further analyses of differences and relationships. In particular, the separate assessment of sexual attraction to men and sexual attraction to women could help to better detect whether sexual attraction to men is more influential in selfie-related behaviours.&#13;
Traditional Masculinity-Femininity scale&#13;
Gender conformity was measured as a continuous variable using the Traditional Masculinity-Femininity (TMF) scale from Kachel et al. (2016). This scale comprises six questions which are self-rated on a scale of 1-7, with 1 representing very feminine and 7 representing very masculine. Example items include "I consider myself as..." and "Ideally, I would like to be...", regarding the respondent’s preferred gender role. The remaining four questions concerned identified gender roles in terms of interests, attitudes and beliefs, behaviours, and appearance from a traditional perspective, each asked as "Traditionally, my... would be considered as...". The mean score of these six items was then used as the individual’s masculinity/femininity score.&#13;
Kachel et al. (2016) pointed out that the TMF scale has been shown to be a reliable one-dimensional construct for assessing masculinity because it correlates well with another gender-related instrument, the Bem Sex Role Inventory (BSRI), and successfully distinguishes between groups that are expected to differ, such as females vs. males and lesbians/gays vs. heterosexual females and males (see Figure 2).&#13;
Figure 2&#13;
Mediation of the relation between BSRI and sexual orientation by the TMF (Kachel et al., 2016). Mean TMF scores separately for gender and sexual orientation (Kachel et al., 2016).&#13;
Selfie Coding&#13;
Participants were asked to upload 1-3 selfies that they would post on social media. All selfies were coded based on four aspects. First, participants were asked whether their uploaded selfies had been edited or retouched. Then, the experimenter coded the number of people in the selfie (alone or in a group), the angle of the selfie (upward, horizontal, or downward), and the amount of body shown (face only, upper body, body without face, or whole body with face).&#13;
In total, 150 portrait pictures were collected from the participants, although 17 of them appeared to have been taken by others rather than being selfies.&#13;
Selfie-related Behaviours&#13;
Measures of selfie-related behaviours and motivations were taken from Bij de Vaate et al. (2018). For motivations, 33 items were used to reflect nine domains of motives, and each item was rated on an agreement scale (1 = totally disagree to 5 = totally agree). The nine domains of motives included "Retention of moments" (e.g., "I make selfies to memorise a moment"), "Entertainment" (e.g., "Making selfies is enjoyable"), "Expressive information sharing" (e.g., "I tell others something about myself by using selfies"), "Social interaction" (e.g., "I make selfies to keep in touch with friends and family"), "Social use" (e.g., "I make selfies to show who I am and what I do"), "Habitual passing of time" (e.g., "Making selfies is a habit"), "Relaxation" (e.g., "Making selfies enables me to relax"), "Imaginary audience" (e.g., "I post selfies with a specific audience in mind"), and "Social pressure and identity" (e.g., "I make selfies because everybody does it"). Preoccupation (e.g., "I often share selfies"), which reflected the degree of engagement in selfie behaviours, was measured with six items on the same agreement scale (1 = totally disagree to 5 = totally agree). See Appendix A for the specific questionnaire items.&#13;
Selfie-taking behaviour was measured by taking frequency, time spent and amount over the last three months, each designed as an ordinal variable of increasing degree. For instance, frequency referred to how often selfies were taken, time spent referred to how long was spent within a selfie session, whereas amount referred to how many photos were taken within a selfie session. Selfie-editing behaviour accounted for only two aspects, editing frequency and time spent, and selfie-posting behaviour used time spent selecting instead of time spent posting.&#13;
Additionally, four items were designed to collect feedback on related concerns and feelings about selfies on a Likert scale that ranged from 0 (totally unconcerned) to 10 (totally concerned), including the attractiveness of their online image, the attention and comments of others on selfies posted on the social platform, and the comparison with other people's selfies. Finally, three questions were designed to reflect participants’ degree of satisfaction with their appearance in real life, before retouching and after retouching, on the same 11-point Likert scale (0 = extremely uncomfortable to 10 = extremely comfortable).&#13;
Procedure&#13;
This study had been reviewed and approved by a member of the Psychology department from the Lancaster University Board of Ethics. At the beginning of the survey, participants were provided with a participant information sheet, informing them that the study was about selfie-related behaviours in terms of sexual orientation and gender conformity. Anonymity and confidentiality were ensured because of the sensitive information involved, such as selfies and sexual orientation. After confirming consent, all participants completed the same questionnaire, which was conducted on Qualtrics (www.qualtrics.com).&#13;
The survey questionnaire (see Appendix A) was designed to collect information through six key sections. In the first section, demographic information was requested, such as age, relationship status, gender identity, sexual orientation, and sexual attraction. The second section was the six-item self-ascribed Traditional Masculinity/Femininity (TMF) scale. In the third section, participants were asked to upload three different selfies that they would post on social media and to state whether these selfies had been retouched; this section was optional, and if a selfie was uploaded, a specific selfie consent was required. The last three sections involved a series of questions on selfie motives and preoccupation, selfie-related behaviours, and feelings. At the end of the survey, participants were given a debrief sheet upon completion and were allowed the chance to ask any questions after the survey was undertaken. In addition, informed consent was confirmed to obtain final approval of all responses before the questionnaire was submitted.&#13;
Analysis &#13;
Pre-Tests&#13;
Firstly, two pre-tests were conducted using one-way analyses of variance (ANOVA), with corresponding post-hoc multiple comparison tests, to examine the expected differences in Traditional Masculinity/Femininity (TMF) score and sexual attraction to men score across the four sample groups.&#13;
Main Tests&#13;
Subsequently, selfie motives in the nine domains, and preoccupation, were examined for differences across the four sample groups using ANOVAs with corresponding post-hoc multiple comparison tests.&#13;
In the context of selfie behaviours, we tested three relevant stages: selfie taking, editing and posting. For each of these behaviours, we conducted ANOVAs with corresponding post-hoc multiple comparison tests to assess differences across the four sample groups, and multiple linear regression analyses to investigate the influence of self-identified gender, TMF score and sexual attraction to men score. In particular, we examined three aspects of selfie taking: frequency, time spent, and amount. Similarly, we analysed two aspects of selfie editing, frequency and time spent, as well as three aspects of selfie posting, which included frequency, time spent on selection, and amount.&#13;
In addition, in order to examine whether selfie content itself was affected by gender conformity and sexual orientation, the 150 uploaded selfies were coded according to four attributes: editing usage, selfie format, the part of the body shown in the selfie, and the taking angle. Each attribute was first tested with a chi-square test to examine the association between that attribute and the four sample groups, because both were categorical variables. Furthermore, we conducted two separate ANOVAs, each followed by post-hoc multiple comparison tests: one used the TMF scores and the other used the sexual attraction to men scores, with each of the four selfie attributes as the independent variable to test for differences. In these analyses, the TMF scores and sexual attraction to men scores, as interval data, were regarded as the dependent variables.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="4034">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4035">
                <text>.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="4036">
                <text>Li2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4037">
                <text>Mshary Al Jaber</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4038">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="4039">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4040">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4041">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="4042">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="4043">
                <text>Jaime Benjamin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="4044">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="4045">
                <text>120</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="4046">
                <text>ANOVA, Chi-squared, Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
    <tagContainer>
      <tag tagId="5">
        <name>gender conformity</name>
      </tag>
      <tag tagId="7">
        <name>self-expression</name>
      </tag>
      <tag tagId="8">
        <name>Selfie</name>
      </tag>
      <tag tagId="6">
        <name>sexual orientation</name>
      </tag>
    </tagContainer>
  </item>
  <item itemId="203" public="1" featured="0">
    <fileContainer>
      <file fileId="234">
        <src>https://www.johnntowse.com/LUSTRE/files/original/012b92077ab1d153d79327092c115315.pdf</src>
        <authentication>992fc24af6f8d556815cd8fc13f48ca7</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4047">
                <text>Student Experiences of Mental Health Issues in Further and Higher Education. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4048">
                <text>Rachel Jordan </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4049">
                <text>17/09/2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4050">
                <text>Previous research has shown that students are at risk of experiencing mental health difficulties, specifically relating to anxiety, depression, and stress (Andrews, Hejdenberg, &amp; Wilding, 2006; Holland, 2016; Landow, 2006; Lattie, Lipson, &amp; Eisenberg, 2019; Nascante, 2001; Shankar &amp; Park, 2016). This study aimed to understand whether level of education, provisions to aid mental wellbeing within educational establishments, and students’ resilience were related to their mental wellbeing. A total of 94 participants were recruited for this study; however, only 47 sets of data were complete enough to be used for the analyses. An online questionnaire using a series of demographic questions and subscales was used to collect data. No significant relationships were found between students’ mental wellbeing and their level of education or the provisions accessible to them in their place of education. However, a significant negative correlation was found between students’ overall mental health and their resilience scores. Additional analyses were completed to better understand this, and the same relationship was found between resilience and anxiety, depression, and stress. It was concluded that, due to issues with power, more research with a larger sample is required to investigate these relationships further. It was also concluded that more understanding of resilience and mental health in students is required to be able to create better provisions.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4051">
                <text>Mental health, stress, achievement anxiety, depression, students, education, provisions, resilience.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="4052">
                <text>Participants&#13;
94 participants (25 males, 43 females) took part in this study; they ranged in age from 18 to over 51 years and were from educational establishments around the UK. Participants were recruited using the SONA participant recruitment system through Lancaster University and also by advertising on Facebook and Instagram. All participants were treated in accordance with BPS ethical guidelines, and the Lancaster University Department of Psychology provided ethical approval for the study (Appendix A). Only data from 47 participants were used due to incomplete datasets.&#13;
&#13;
Design&#13;
This cross-sectional study used volunteers from the student population as a single sample group. It was a questionnaire-based study with four sub-scales, analysed using correlational analyses. The factors analysed are detailed in the procedure section that follows.&#13;
Procedure&#13;
Adverts were placed on Facebook and Instagram to help recruit participants to the study. Potential participants were provided with a link to the online survey, administered through Qualtrics. They were then provided with a participant information sheet (Appendix B) and gave informed consent (Appendix C) to participate on that basis. Informed consent was gained by participants selecting all six consent statements on the questionnaire. Following this, participants were presented with demographic questions and four measures (Appendix D). Finally, participants were presented with a debrief sheet (Appendix E) before being asked to close the tab. &#13;
Materials&#13;
Demographic Questions&#13;
The questionnaire started with two demographic questions. These were:&#13;
“How old are you in years?”  with the options of “18-21/22-25/26-29/30-35/36-40/41-45/46-50/51+/Prefer not to answer” and “What was your assigned sex at birth?” with the options of “Male/Female/Prefer not to answer”.&#13;
These items were included in this questionnaire to better understand the sample of participants included in the study.&#13;
&#13;
Level of Education&#13;
Participants’ level of education was measured using one multiple choice item.  This item was:&#13;
“What level of education are you in?”&#13;
The options for this multiple-choice item were “A Levels/ Apprenticeship/ Undergraduate Degree/ Postgraduate Degree/ PhD/ Other (please specify)/ Prefer not to answer”. This item was included to help investigate whether the level of participants’ education was related to their mental wellbeing.&#13;
Mental Wellbeing&#13;
Existing mental wellbeing was measured using two items which both used multiple choice options. These questions were:&#13;
“Have you ever been diagnosed with a mental health condition?” This item had options of “Yes/ No/ Prefer not to answer”. &#13;
“Please select if any of these diagnostic categories apply to your diagnoses.” This item was only included if the participant answered the previous question with “Yes”. The options for answering this item were “Anxiety Disorder/ Depression/ Eating Disorder/ Stress/ Psychosis/ Personality Disorder/ Other (please specify)/ Prefer not to answer.”&#13;
Measures of Support&#13;
Two questions were used in this questionnaire to gauge how supported students felt by their educational establishments. These questions were:&#13;
“How much support do you feel is available for your mental health at your place of education?” This question used a Likert scale ranging from one (lots) to four (I don’t know), with an option of ‘prefer not to answer’ included. &#13;
This was followed by the open, qualitative question of “Please tell us about any mental wellbeing support you know is available in your place of education.” This question had an open response box, allowing participants to communicate their understanding of support available for their mental wellbeing in their educational institutions.&#13;
Perceived Stress Scale (PSS-10)&#13;
The Perceived Stress Scale (Andreou et al., 2011; Cohen, Kamarck, &amp; Mermelstein, 1983; Cohen, Kamarck, &amp; Mermelstein, 1994; Reis, Hino, &amp; Añez, 2010; Roberti, Harrington, &amp; Storch, 2006) (Appendix F) was used, with responses given on a five-point Likert scale ranging from one (never) to five (very often). This scale, consisting of ten items, was used to measure how stressed participants believed they were. One example of the items used in this scale is:&#13;
“In the last month, how often have you felt that you were on top of things?”&#13;
The ten-item version of this measure was used in this study because research (Roberti, Harrington, &amp; Storch, 2006) generally commented that the ten-item scale was a reliable and valid measure of perceived stress when compared to the original, longer Perceived Stress Scale (Cohen, Kamarck, &amp; Mermelstein, 1983). Therefore, the PSS-10 was chosen for this questionnaire to reduce time demands on participants without compromising the reliability and validity of the measure. &#13;
&#13;
Adult Resilience Measure Revised (ARM-R)&#13;
The Adult Resilience Measure Revised (Resilience Research Centre, 2018; Jefferies, McGarrigle &amp; Ungar, 2018) (Appendix G) was used within the questionnaire to assess participants’ resilience skills. This seventeen-item measure used a five-point Likert scale ranging from one (not at all) to five (a lot), with an option of ‘prefer not to answer’. An example of an item on this scale is:&#13;
“My friends stand by me during difficult times.”&#13;
Centre for Epidemiological Studies Depression Scale Revised (CESD-R)&#13;
The Centre for Epidemiological Studies Depression Scale Revised (Eaton et al., 2004; Van Dam &amp; Earleywine, 2011) (Appendix I) was used as a measure in this study to assess how depressed the participants felt. This measure used a four-point Likert scale ranging from rarely or none of the time to most or all of the time, with an option of ‘prefer not to answer’ included. There were 20 items included in this scale to measure this factor; one example is:&#13;
“I felt everything I did was an effort.”&#13;
Achievement Anxiety Test (AAT)&#13;
The Achievement Anxiety Test (Alpert &amp; Haber, 1960) (Appendix I) was used in this questionnaire to measure how anxious participants were about their ability to achieve. This measure used a five-point Likert scale with one meaning always and five meaning never; the scale consisted of nineteen items. An option of ‘prefer not to answer’ was also provided. One example of an item on the scale is:&#13;
“I work most effectively under pressure, as when a task is very important.”&#13;
All measures in this questionnaire had an additional ‘prefer not to answer’ option added for the purposes of this study, allowing forced-choice answers to be used without removing the participants’ right to withdraw or withhold information.&#13;
Ethics&#13;
This study was conducted after ethical approval was received from the ethics committee of the Lancaster University psychology department on 12th June 2023.&#13;
One ethical issue that could come up in this study is that participants could believe that there is some diagnostic weight to the questionnaire. &#13;
Analyses&#13;
Descriptive statistics were calculated for all variables and demographic data, specifically their means and standard deviations.&#13;
Following this, correlational analyses were completed to determine whether there were relationships between mental wellbeing scores (taken as a combination of the AAT, PSS-10 and CESD-R subscales included in the questionnaire), resilience, pre-existing mental health, provisions being accessed, and educational level.&#13;
If significant relationships were identified through the correlational analyses, regressions would be conducted to investigate these relationships further and to identify whether they were causal.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="4053">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4054">
                <text>Data/excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="4055">
                <text>Jordan2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4056">
                <text>Megan Grace Liddell</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4057">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="4058">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4059">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="4060">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="4061">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="4062">
                <text>Dr. Chris Walton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="4063">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="4064">
                <text>Clinical </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="4065">
                <text>94</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="4066">
                <text>ANOVA, Correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
