<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=14&amp;sort_field=added" accessDate="2026-05-03T14:53:04+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>14</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="186" public="1" featured="0">
    <fileContainer>
      <file fileId="205">
        <src>https://www.johnntowse.com/LUSTRE/files/original/42f25a4afae4681322de3eaca175d305.pdf</src>
        <authentication>f34904e516c4c04821ec1e52402b3ea9</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3707">
                <text>Cerebral Lateralisation for Emotion Processing of Chimeric Faces in Individuals with Autism Spectrum Disorder</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3708">
                <text>Alexandra Crossley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3709">
                <text>5th September 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3710">
                <text>Many studies have suggested that emotion processing tasks, such as facial emotion recognition, are typically lateralised to the right hemisphere, with different emotions eliciting differing strengths of lateralisation (Bourne, 2010). However, there has been much debate as to the lateralisation of individuals with autism spectrum disorder (ASD) (Ashwin et al., 2005; Shamay-Tsoory et al., 2010). This study assessed the cerebral lateralisation of 30 adults with ASD, five children with ASD, 435 neurotypical adults and ten neurotypical children in a chimeric faces task, and aimed to identify whether the atypical lateralisation seen in children with ASD persists into adulthood (Taylor et al., 2012). Furthermore, the study aimed to identify whether lateralisation strength is affected by the emotion of the facial stimuli. No emotion- or age-related change in lateralisation was found; however, participants with ASD demonstrated weaker right-hemispheric lateralisation than neurotypical participants. This study therefore supported the concept that individuals with ASD show atypical lateralisation that persists into adulthood, but found no evidence that different emotions elicit different strengths of lateralisation.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3711">
                <text>autism spectrum disorder, cerebral lateralisation, emotion processing, adults, children, chimeric faces task</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3712">
                <text>Method&#13;
Participants&#13;
Data from a total of 481 participants with native level English proficiency (or age expected language development in children), normal or corrected-to-normal vision and no history of neurological disease or hearing loss were analysed for the current study (Table 1). Participants in the group ‘adults with ASD’ (N = 30; age: M = 30.17, SD = 9.85) were recruited through adverts on social media, through Prolific Academic (www.prolific.co), and through word of mouth. Participants in the groups ‘children with ASD’ (N = 5; age: M = 6.8, SD = 1.48) and ‘neurotypical children’ (N = 11; age: M = 7.0, SD = 1.90) were recruited through primary schools and word of mouth (Brooks, 2023), and parents of potential child participants were required to email a researcher to express their interest in participation. Participants in the group ‘neurotypical adults’ (N = 435; age: M = 29.44, SD = 8.03) were recruited through Prolific Academic (www.prolific.co) as part of a larger online behavioural laterality battery (Parker et al., 2021). Of the 481 participants who took part in the study, 32 were excluded during the data cleaning process (see Table 1 and Data Analysis for further information).&#13;
Measures&#13;
As part of the study, a series of questionnaires was administered to collect information about the participants so that individual differences could be accounted for. Participants were asked to complete these questionnaires prior to beginning the main chimeric faces task, and were requested to use a desktop or laptop computer for the entirety of the study. For the ‘neurotypical children’ and ‘children with ASD’ groups, parents were asked to complete the questionnaires on behalf of the children and to be present for the tasks, which were completed during a Microsoft Teams call with a researcher.&#13;
The study was completed online using the Gorilla Experiment Builder (www.gorilla.sc), a cloud-based tool for collecting data in the behavioural sciences. &#13;
Demographic Questionnaire&#13;
The demographic questionnaire recorded participants’ age, gender, length of time in education (in years) and language status, and included two questions assessing handedness (“Which is your dominant hand? / Which hand do you prefer to use for tasks such as writing, cutting, and catching a ball?”) and footedness (“Which foot do you normally use to step up on a ladder/step?”), and two eye dominance tests (Miles, 1929; Porac &amp; Coren, 1976). Participants were also asked whether they had a diagnosis of any developmental disorder, including ASD, dyslexia, attention deficit hyperactivity disorder or a language disorder (such as 'developmental language disorder' or 'specific language impairment'). For each diagnosis, participants could answer “Yes”, “No”, or “Prefer not to say”, with the exception of ASD, which also offered “No but I am self-diagnosed”. At this point, participants were sorted into their groups based on age (‘children’: five- to 11-years-old; or ‘adults’: 18- to 50-years-old) and ASD diagnosis (‘with ASD’ or ‘neurotypical’). Adults with a self-diagnosis of ASD were included in the ‘adults with ASD’ group.&#13;
Edinburgh Handedness Inventory&#13;
The Edinburgh Handedness Inventory (EHI; Oldfield, 1971) was administered to provide a scaled score of handedness. Adult participants were asked to score ten daily tasks on a five-point Likert scale based on which hand they preferred to use during each task (“Left hand strongly preferred” = 2, “Left hand preferred” = 1, “No preference” = 0, “Right hand preferred” = 1, or “Right hand strongly preferred” = 2). These tasks included daily activities such as writing, brushing teeth, and opening a box. The EHI was scored by combining the direction and exclusiveness of the hand preference. Two totals were created: one of right-hand preference and one of left-hand preference. The difference was then found by subtracting the left-hand total from the right-hand total. This was then divided by the total score of both hand preference scores and multiplied by 100 (i.e., 100 x (right-hand total – left-hand total) / (right-hand total + left-hand total)). Final EHI scores ranged from -100 to +100, with positive scores indicating right-handedness, and negative scores indicating left-handedness. Child participants were not required to complete the EHI questionnaire.&#13;
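To illustrate the scoring, a minimal sketch follows (an editorial illustration rather than the study’s code; encoding each response as a signed value from -2 to +2 is an assumption about how the Likert categories are stored):&#13;
def ehi_score(item_responses):&#13;
    # item_responses: ten signed values, -2 ("Left hand strongly preferred")&#13;
    # through 0 ("No preference") to +2 ("Right hand strongly preferred")&#13;
    right_total = sum(max(r, 0) for r in item_responses)&#13;
    left_total = sum(-min(r, 0) for r in item_responses)&#13;
    # 100 x (right-hand total - left-hand total) / (right-hand total + left-hand total)&#13;
    return 100 * (right_total - left_total) / (right_total + left_total)&#13;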
Lexical Test for Advanced Learners of English&#13;
A version of the Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer &amp; Broersma, 2012) was administered to assess participants’ level of English proficiency. Adult participants were shown 60 written stimuli comprising English words and pseudowords (pronounceable nonsense words that follow the orthographic and phonetic rules of English, e.g. ‘proom’) and were asked to judge whether each was an existing English word. Test scores were calculated by averaging the percentages of correct answers for words and pseudowords, with final scores ranging from 0 to 100. Child participants were not required to complete the LexTALE task.&#13;
Autism-Spectrum Quotient (Short Version)&#13;
An abridged version of the Autism-Spectrum Quotient (AQ-Short; Hoekstra et al., 2011) was used to provide a measure of ASD traits. Participants with ASD were asked to rate 28 statements on a four-point Likert scale based on their level of agreement, with each answer accruing a different number of points (“Definitely agree” = 1, “Slightly agree” = 2, “Slightly disagree” = 3, or “Definitely disagree” = 4). On items where “Definitely agree” represented a characteristic of ASD, the scoring was reversed. The scores for each question were totalled, with potential scores ranging from 28 (no ASD traits) to 112 (endorsement of all ASD traits). Scores above 65 indicated ASD traits to a diagnosable degree. Neurotypical participants were not required to complete the AQ-Short questionnaire.&#13;
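A comparable sketch of the AQ-Short scoring (illustrative only; the set of reverse-keyed items is left as an input rather than taken from the published key):&#13;
def aq_short_score(responses, reverse_keyed):&#13;
    # responses: dict mapping item number to rating, 1 ("Definitely agree")&#13;
    # to 4 ("Definitely disagree"); reverse_keyed: items on which agreement&#13;
    # indicates an ASD characteristic, so their scoring is flipped&#13;
    total = 0&#13;
    for item, rating in responses.items():&#13;
        total += (5 - rating) if item in reverse_keyed else rating&#13;
    return total  # 28 (no ASD traits) to 112; above 65 suggests diagnosable traits&#13;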
Procedure&#13;
Lateralisation for Facial Emotion Processing Task&#13;
A chimeric faces task was used to assess lateralisation for facial emotion processing.&#13;
Stimuli. The chimeric faces stimuli were created by Dr Michael Burt (Burt &amp; Perrett, 1997) and provided by Parker et al. (2021).&#13;
A set of 16 facial stimuli was created by merging two photographs of a man’s face, each depicting one of four emotions (‘happiness’, ‘sadness’, ‘anger’, or ‘disgust’), split vertically down the centre of the face and blended at the midline (see Figure 1 for an example). Each emotion was paired either with itself, so that both hemifaces of the stimulus matched in emotion (a ‘same face’), or with a differing emotion, so that the two hemifaces differed (a ‘chimeric face’). Of the 16 stimuli, 12 were ‘chimeric face’ and four were ‘same face’.&#13;
Task. Each trial began with a fixation cross shown for 1000ms, followed by the face stimulus for 400ms. Participants then recorded which emotion they saw most strongly by clicking the corresponding button from a choice of the four emotions (Figure 2). For the children, emoticons were used instead of written words (Oleszkiewicz et al., 2017) (Figure 3). A response triggered the beginning of the next trial, with a time-out duration of 10400ms, after which the next trial was triggered automatically. Response choices and response times were recorded.&#13;
The task was split into four blocks of trials with a break between each block. Stimuli were presented in a random order and shown twice in each block, resulting in the participants being shown 32 stimuli per block and a total of 128 within the whole task. &#13;
Participants were familiarised with the stimuli at the start of the task: the ‘same face’ stimuli were shown alongside a label naming the emotion presented, to ensure participants could recognise the emotions, and a practice block using the emotions ‘surprise’ and ‘fear’ was given to ensure participants knew how to complete the task.&#13;
Additional Measures&#13;
As data collection also included tasks for other studies, participants were also asked to complete a version of the Empathy Quotient – short (Wakabayashi et al., 2006), and undertake a dichotic listening task and its associated device checks (Parker et al., 2021). As these items were not part of the main study, participants were asked to complete these following the completion of the main study and its associated questionnaires and tasks, to ensure any findings from the study were not due to the additional measures.&#13;
Laterality Index&#13;
A laterality index (LI) for each participant was calculated using the same method as Parker et al. (2021) by finding the difference between the number of times the participant chose the right-hemiface emotion and the left-hemiface emotion. This was then divided by the total number of times they chose either the right- or left-hemiface emotion, and multiplied by 100 (i.e., 100 x (right hemiface – left hemiface) / (right hemiface + left hemiface)). Scores ranged between -100 and +100, with a negative LI indicating a left-hemiface bias, and thus, a right-hemispheric dominance, and a positive LI showing the opposite.&#13;
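Illustratively, the LI could be computed from trial-level data as follows (a sketch; the column name is an assumption, not taken from the original dataset):&#13;
import pandas as pd&#13;
&#13;
def laterality_index(trials: pd.DataFrame):&#13;
    # trials: one participant's trials, with 'chosen_hemiface' coded 'right' or&#13;
    # 'left'; trials where neither hemiface emotion was chosen are excluded first&#13;
    right = (trials['chosen_hemiface'] == 'right').sum()&#13;
    left = (trials['chosen_hemiface'] == 'left').sum()&#13;
    # negative LI: left-hemiface bias, i.e. right-hemispheric dominance&#13;
    return 100 * (right - left) / (right + left)&#13;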
Data Analysis&#13;
Participants who scored less than 80 on the LexTALE task were removed, as it was deemed that their understanding of English was not strong enough and might cause issues with understanding the instructions (Parker et al., 2021). Furthermore, all trials with a response time faster than 200ms were removed, as responses at this speed were judged too quick to have been based on processing of the stimuli (Parker et al., 2021). In addition, outlier response times for each participant were removed using Hoaglin &amp; Iglewicz's (1987) procedure: outliers were any response times more than 1.65 times the interquartile range below the first quartile or above the third (i.e., below Q1 – (1.65 x (Q3 – Q1)) or above Q3 + (1.65 x (Q3 – Q1))). Following the removal of all outlying trials, any participant with less than 80% of trials remaining was removed. Finally, participants who scored less than 75% on ‘same face’ trials (trials in which both hemifaces depicted the same emotion) were noted, because emotion processing is an area of difficulty for individuals with ASD: three participants in the ‘children with ASD’ group (60%), three in the ‘neurotypical children’ group (27.27%), four in the ‘adults with ASD’ group (13.33%), and 30 in the ‘neurotypical adults’ group (7.41%) scored less than 75% on ‘same face’ trials, suggesting they had difficulty identifying the emotions.&#13;
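A minimal sketch of this trial-cleaning rule (illustrative only, not the study’s script):&#13;
import numpy as np&#13;
&#13;
def clean_response_times(rts):&#13;
    # rts: one participant's response times in ms&#13;
    rts = rts[rts >= 200]                      # drop responses faster than 200ms&#13;
    q1, q3 = np.percentile(rts, [25, 75])&#13;
    fence = 1.65 * (q3 - q1)                   # Hoaglin and Iglewicz (1987) criterion&#13;
    keep = (rts >= q1 - fence) &amp; (rts &lt;= q3 + fence)&#13;
    return rts[keep]&#13;
# participants retaining less than 80% of their trials were then removed entirely&#13;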
To address the hypotheses, a linear model was fitted with LI as the outcome and group (‘ASD’ or ‘neurotypical’), age (‘adult’ or ‘child’) and emotion (‘happy’ and ‘angry’, or ‘sad’ and ‘disgust’) as the predictors, including all interactions between predictors (Group x Age; Group x Emotion; Age x Emotion; and the three-way interaction, Group x Age x Emotion).&#13;
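Such a model could be fitted along these lines (an illustrative sketch, not the original analysis script; the file and column names are hypothetical):&#13;
import pandas as pd&#13;
import statsmodels.formula.api as smf&#13;
&#13;
# hypothetical cleaned long-format data: one LI per participant-emotion pairing&#13;
df = pd.read_csv('li_data.csv')&#13;
# group * age * emotion expands to all main effects, two-way interactions,&#13;
# and the three-way Group x Age x Emotion interaction&#13;
model = smf.ols('LI ~ group * age * emotion', data=df).fit()&#13;
print(model.summary())&#13;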
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3713">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3714">
                <text>.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3715">
                <text>Crossley2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3716">
                <text>Alexandra Haslam&#13;
Alexis McGuire&#13;
Xue Guo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3717">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3718">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3719">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3720">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3721">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3722">
                <text>Margriet Groen</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3723">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3724">
                <text>Developmental, Neuropsychology </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3725">
                <text>481 participants with native-level English proficiency; 164 male, 240 female and 1 other.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3726">
                <text>Linear Mixed Effects Modelling and T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="187" public="1" featured="0">
    <fileContainer>
      <file fileId="211">
        <src>https://www.johnntowse.com/LUSTRE/files/original/3f375427b3cd3cd552632ac865895843.pdf</src>
        <authentication>1414b72894a9a0b026784d7012d88fd3</authentication>
      </file>
    </fileContainer>
    <collection collectionId="3">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="181">
                  <text>EEG</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="182">
                  <text>Electroencephalography (EEG) is a method for monitoring electrical activity in the brain. It uses electrodes placed on or below the scalp to record activity with coarse spatial but high temporal resolution</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3732">
                <text>The Effect of Repetitive Headers on Acute Vestibular, Neural, Cognitive and Auditory Function in Football Players</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3733">
                <text>Jessica Andrew</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3734">
                <text>September 5th, 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3735">
                <text>The potential long-term consequences of repetitive sub-concussive head impacts, particularly from heading in football, have raised concerns about their association with neurodegenerative diseases in ex-professional football players. Recent research suggests that the accumulative nature of heading in football may lead to subtle brain changes, ultimately contributing to Chronic Traumatic Encephalopathy. This study aimed to investigate the immediate short-term effects of repeated headers in football on brain function. Seventeen football players completed a total of five high-force linear headers, one header every 2 minutes, imitating corner clearance headers, positioned 32 meters away from a ball launching machine. Four neurophysiological assessments were reported pre- and post-heading exercise: 1) vestibular evaluation for balance and sway changes, 2) neural assessment for resting brain activity changes, 3) cognitive tests measuring memory, attention and reaction time, and 4) auditory assessment for any auditory processing changes. Paired-samples t-tests and Wilcoxon’s signed rank tests found no significant pre-to-post heading exercise changes in any measurement. These findings warrant further investigation to determine whether the measures used were sensitive enough to detect subtle sub-concussive changes, or whether they indicate that a safe maximum number of headers, specific to this type of header, has been established and that this frequency poses no additional risk to footballers’ brain function. This study contributes to the ongoing research surrounding player safety in football and the immediate short-term effects of repetitive sub-concussive head impacts.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3736">
                <text>Repetitive Sub-concussion, Football Heading, Neurocognitive Performance</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3737">
                <text>Method&#13;
Participants&#13;
A power analysis for Analysis of Variance, conducted at an 80% power level, identified a minimum of 40 participants to detect a medium effect size of f = 0.25 at α = .001. This study did not collect a full sample and is therefore underpowered, with a total of 17 participants (mean age = 20.35). Participants were either academy players from Burnley Football Club or members of Lancaster University’s football team, and were required to be male, aged between 18 and 30 years, with no history of concussion within the last month. This kept variability between participants minimal; excluding individuals with a recent history of concussion mitigated potential confounding effects and isolated the acute sub-concussive effects of heading, meaning observed effects could be better attributed to the specific act of heading rather than to prior injuries. Prior to volunteering, participants gave full consent and completed a modified version of the Physical Activity Readiness Questionnaire (PAR-Q), which is designed to measure participants’ readiness to participate in exercise or physical activity (see Appendix A for the questionnaire). The purpose of the PAR-Q was to identify any potential underlying health concerns that might become an issue during participation. Additionally, participants completed a demographic questionnaire, which was used to collect information about characteristics of the sample and highlighted whether participants had recently been concussed (see Appendix B for the questionnaire). If any health concerns emerged during the completion of either questionnaire, participants were unable to continue with participation.&#13;
Materials&#13;
Participants were tested using a test battery comprising the four elements detailed below.&#13;
PROTXX.&#13;
Vestibular sway was measured using a wearable inertial measurement unit (IMU) called PROTXX. An IMU is an electronic device designed to measure and report an individual’s orientation, velocity and gravitational forces (Powell et al., 2022). The IMU includes an accelerometer with three axes: the X-axis measures front-back acceleration, the Y-axis measures vertical acceleration, and the Z-axis measures left-right acceleration. For each of the three axes (x, y and z), during each 60-second test, data are recorded at a sampling rate of 100Hz, generating a total of 12,000 samples. Samples are filtered: PROTXX eliminates gravitational bias and drift using a high-pass filter with a 0.04Hz cut-off frequency. An overall average is taken for each axis to compute one score for each of the four measures: 1) eyes open, 2) eyes closed, 3) a ratio of the first two scores, and 4) average power. It is also thought that the average power, calculated by adding the eyes-closed and eyes-open scores together and dividing by 2, can support a more objective way to clinically diagnose concussion than the single tests alone (Ralston et al., 2020).&#13;
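As a rough sketch of how these four summary scores relate (illustrative only; the per-axis aggregation PROTXX applies internally is not public, so the mean absolute sway used here and the closed-to-open direction of the ratio are assumptions):&#13;
import numpy as np&#13;
&#13;
def protxx_scores(eyes_open, eyes_closed):&#13;
    # each input: filtered sway samples, one column per axis (x, y, z)&#13;
    open_score = np.abs(eyes_open).mean(axis=0).mean()    # average over axes&#13;
    closed_score = np.abs(eyes_closed).mean(axis=0).mean()&#13;
    return {'eyes_open': open_score,&#13;
            'eyes_closed': closed_score,&#13;
            'ratio': closed_score / open_score,&#13;
            'average_power': (open_score + closed_score) / 2}&#13;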
EEG Acquisition and Pre-Processing&#13;
Neural function was measured using EEG with an Enobio 8 5G wireless device (Neuroelectrics, Cambridge, MA, USA). Participants wore a Neoprene headband to collect data from the frontal part of the head only, as this is where participants would later be instructed to header the ball. The Neoprene headband offers predefined positions for the seven channels (F7, AF7, Fp1, Fpz, Fp2, AF8, F8) used to record EEG data and is based on the 10-10 international system (Jurcak et al., 2007). Figure 1 is a schematic of the electrode location sites on the forehead. Participants wore an ear clip on their right ear with reference DRL/CMS electrodes. EEG data were initially visualised at a sampling rate of 500Hz with the line noise filter at 50Hz. Sticktrode pre-gelled self-adhesive electrodes were used, placed under the gaps of the Neoprene headband. The Necbox, the core of the Enobio system, was wirelessly connected to a laptop using NIC software (Neuroelectrics, Barcelona, Spain). Before any analysis, recorded EEG signals were coded and pre-processed in EEGLAB, a MATLAB toolbox (Mathworks, Natick, MA, USA; Delorme &amp; Makeig, 2004; see Appendix C for the EEGLAB script), to ensure that the data were in a suitable format and of reliable quality for analysis. Signals were downsampled to 256Hz, re-referenced to the average of all channels, and high-pass (0.1Hz) and low-pass (40Hz) filters were applied. Independent Component Analysis was then applied to the pre-processed EEG data using a threshold of 0.8, to identify and remove eye blink, heart and muscle artifacts with 80% certainty (Chang et al., 2020). Components scoring between 0.8 and 1 for artifacts were flagged for rejection and removed from the EEG data.&#13;
Neural activity pre- and post-heading exercise was analysed using power spectral density (PSD) analysis, a method used to analyse the frequency components present in a signal. This study used the spectopo() function within EEGLAB. The average power of the EEG frequency bands was calculated for each of the seven electrodes used in this study. The frequency bands were separated as follows: theta (4-8Hz), alpha (8-12Hz), beta (12-30Hz) and gamma (30-40Hz) (Harris &amp; Myers, 2023; Munia et al., 2017).&#13;
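An equivalent band-power computation can be sketched with scipy’s Welch estimator standing in for EEGLAB’s spectopo() (band edges follow the text; parameter choices are illustrative):&#13;
from scipy.signal import welch&#13;
&#13;
BANDS = {'theta': (4, 8), 'alpha': (8, 12), 'beta': (12, 30), 'gamma': (30, 40)}&#13;
&#13;
def band_powers(signal, fs=256):&#13;
    # signal: one electrode's pre-processed EEG trace at the downsampled rate&#13;
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)&#13;
    return {band: psd[(freqs >= lo) &amp; (freqs &lt; hi)].mean()&#13;
            for band, (lo, hi) in BANDS.items()}&#13;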
ImPACT Quick Test&#13;
ImPACT Quick Test measures different areas of cognitive function using five subtests that contribute to three overall composite scores used within this study’s analysis: Motor Speed, Memory, and Attention Tracker. The five subtests used to measure participants’ cognitive abilities are:&#13;
1. Symbol Match – Reaction Time Subtest. The first subtest was a symbol match test which measured reaction time. Participants had to match a series of shapes with specific numbers, and the average time taken to complete all trials was recorded. (Figure 2a)&#13;
2. Symbol Match – Memory Subtest. This symbol match test also measured memory, asking participants to recall which symbol was paired with which number. The resulting score is the percentage of correctly recalled number-symbol pairs across the trials. (Figure 2b)&#13;
3. Three Letter Memory – Speed Subtest. The participant is initially given three consonants, then shown a computer-randomised 5x5 number grid and asked to count backwards from 25. The result is how long the participant takes to count backwards from 25 to 1. This subtest provides a measure of speed, but also serves as an interference task for the next subtest. (Figure 2c)&#13;
4. Three Letter Memory – Memory Subtest. This subtest measured participants’ memory and recall, testing how well participants could recall the three consonants after completing the 5x5 number grid interference task. (Figure 2d)&#13;
5. Attention Tracker – Reaction Time and Attention Subtest. This subtest comprises three separate tasks and involves a circle that moves in the shape of a square, a figure 8, and a sporadic/random pattern across the screen. The participant is asked to tap the circle when it changes from red to green at various points during its movement. This subtest provides results for reaction time (how fast the participant reacts to the colour change) and sustained attention (how well the participant keeps their attention on the moving circle). (Figure 2e)&#13;
Digits in Noise Test (DiN)&#13;
The final testing measure used within this study was an online DiN test measuring participants’ auditory function. The DiN task is written in JavaScript and hosted as a web application on a Google Cloud Platform. Participants remained seated for this measure and listened to a British female voice saying three digits in a random order, embedded in speech-shaped background noise (Smits et al., 2004). Stimuli were presented diotically in a quiet environment through supplied wired overhead SteelSeries 5Hv2 headphones. Signal-to-noise ratio (SNR) quantifies the strength of a desired signal relative to the background noise level. A flexible approach called an adaptive 1-up, 1-down psychophysical method was employed: when a participant recalled the three digits correctly, the SNR decreased, and when the participant recalled them incorrectly, the SNR increased. The DiN test began at an SNR of 0dB. As the test progressed, the changes in difficulty, known as step sizes, decreased from 5 to 2dB after 3 reversals, and after 3 more reversals reduced further to 0.5dB. A reversal refers to a change in direction, i.e., the difficulty level being adjusted in the opposite direction. The test concluded after a total of 10 reversals; the final five SNRs were recorded and averaged to calculate the participant’s speech-in-noise threshold. This threshold represents the level of background noise at which participants correctly identify the digits spoken to them.&#13;
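Schematically, this adaptive staircase could be implemented as follows (an illustrative sketch, not the study’s web application; respond() stands in for presenting a digit triplet at the given SNR, and averaging the last five reversal SNRs is one reading of “the final five SNRs”):&#13;
def din_threshold(respond):&#13;
    # respond(snr) returns True when all three digits are recalled correctly&#13;
    snr, direction, reversals = 0.0, None, []      # test begins at 0dB SNR&#13;
    while len(reversals) &lt; 10:                     # conclude after 10 reversals&#13;
        # step size: 5dB until 3 reversals, 2dB until 6, then 0.5dB&#13;
        step = 5.0 if len(reversals) &lt; 3 else (2.0 if len(reversals) &lt; 6 else 0.5)&#13;
        new_direction = 'down' if respond(snr) else 'up'   # 1-up, 1-down rule&#13;
        if direction is not None and new_direction != direction:&#13;
            reversals.append(snr)                  # a direction change is a reversal&#13;
        direction = new_direction&#13;
        snr = snr - step if new_direction == 'down' else snr + step&#13;
    return sum(reversals[-5:]) / 5.0               # mean of the final five reversals&#13;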
Football Heading&#13;
Within this study, participants received headers from a ball launching machine (Ball Launcher Pro Trainer, Ball Launcher). Participants completed five high-force linear headers at 35 yards from the machine at a ball speed of 50mph; this speed is regarded as below the average for a corner kick at collegiate level, which helps reduce the likelihood of injury and discomfort to players (Elbin et al., 2015; Tierney et al., 2021). The exercise is designed to mimic heading during football matches, specifically a clearance header from a corner (Figure 3). The ball launcher allowed each of the headers to be consistent when measuring the effects of heading in football. The football used in this study was size 5, inflated to the FA standard of 8.6-15.6 PSI (The Football Association, 2023).&#13;
Procedure&#13;
A chronological schematic representation of the experimental procedure has been provided below (Figure 4).&#13;
Players at Burnley Football Club were contacted via their club’s representative, and Lancaster University players were emailed directly. Upon arrival, participants were informed that the study would take around one hour to complete and were asked to read the participant information sheet to ensure they fully understood the requirements before completing the consent, PAR-Q and demographic forms. Participants’ height and weight were measured on the day so that the demographic questionnaire could be completed accurately. These forms were screened by the researcher(s) to ensure eligibility. Once completed, participants were first tested using the PROTXX sensor. Participants were asked whether they experienced any skin irritation or sensitivities due to prolonged adhesive contact, for example when using plasters. If there were no known adhesive-related reactions, the PROTXX sensor was attached to the right mastoid using a disposable medical adhesive patch (Figure 5). If participants did have adhesive-related reactions, the PROTXX sensor was placed into a headband positioned in the same location (Figure 6).&#13;
Participants were instructed to stand still, in an upright relaxed position with feet hip-width apart and arms by their sides, maintaining a straight, fixed gaze three meters away from a specific target. Participants were instructed not to talk, chew gum, turn their head, fidget or move while a test was in progress. A smartphone app (protxxclinic; Version 1.0, build 13) connected to PROTXX via Bluetooth to run the tests and collect data. Participants completed two 60-second trials: eyes open and eyes closed. The app was used to start the test, and participants were made aware of an audible countdown. One researcher stood by the participant to guard against any apprehension of falling during the eyes-closed trial. The app sounded a tone when the test was 10 seconds from finishing. Participants were instructed not to move until the tests were completed and the researchers had informed them that they could relax. If any anomalous participant movement was observed during testing, that test’s data were excluded from analysis.&#13;
The second testing measure completed was EEG. Participants were seated for this measure and, prior to the EEG set-up, were asked to wipe their foreheads with an alcohol wipe to reduce impedance. Participants wore a Neoprene headband across their forehead with seven pre-gelled adhesive electrodes placed on bare skin at each channel site, and the reference channels were clipped to their right ear (Figure 7).&#13;
Once electrode placement was completed, the device was connected via Bluetooth to a desktop app. The researcher(s) instructed participants to blink rapidly several times to create distinct electrical patterns on the EEG recordings. This procedure, known as an artifact-inducing task, is used to verify the quality of EEG readings (Grosselin et al., 2019). Participants were asked to sit in a comfortable position with eyes closed while 5 minutes of resting-state EEG activity was recorded. A quiet environment with minimal foot traffic was used to reduce background noise and lessen the potential for auditory artifacts.&#13;
The third testing measure completed was the ImPACT Quick Test. Participants remained seated for this measure and completed the assessment tool on an iPad in a quiet environment to remove distractions. The iPad was placed on a table in front of the participant, who was instructed not to hold it in their hands (Figure 8). The test was taken in one sitting and took participants between 5 and 7 minutes to complete.&#13;
The final testing measure participants completed was the DiN. This measure required participants to remain seated in the quiet environment and wear the provided overhead headphones, which were plugged into the iPad (Figure 9). Before the test began, some music was played through the headphones and participants were asked to find a volume level that was comfortable for them, which they were instructed not to change once selected. Participants were informed that this measure would vary in difficulty and that they should guess the digits if they were unable to identify them. A practice trial was offered so that participants were familiar with the task and response procedure before the measure began. Participants input the three digits that they heard or guessed on the keypad displayed on the iPad. Again, this test was completed in one sitting and took no more than 3 minutes.&#13;
After all baseline assessments were complete, participants moved on to the heading exercise, which was conducted in an indoor open space. The primary objective of this exercise was to execute five consecutive linear high-force headers within a timeframe of 10 minutes, giving participants 2 minutes of rest between each header. Before commencing the heading exercise, participants received a briefing to prepare them. They were informed about their designated position, situated 35 yards away from the ball launching machine, replicating the distance of a typical corner kick in real-game scenarios. The ball would be launched at a velocity of 50mph from the ball launching machine, ensuring consistency. To optimise their heading technique, participants were encouraged to aim for frontal contact and direct the ball back in a linear trajectory towards the ball launching machine, and were allowed to take a single step and jump into the header (to replicate real-life situations). Additionally, a secondary researcher positioned further back from the participant was responsible for retrieving any missed headers, sparing participants unnecessary energy expenditure. To familiarise participants with the dynamics and help maximise their performance during the heading task, participants were acclimatised to the ball’s trajectory, observing several ball launches from the side-line and standing in their designated position before initiating any heading attempts. This also ensured that participants were comfortable with the ball speed.&#13;
Participants immediately completed the test battery again to obtain their post-heading scores, which were compared with their baseline scores to evaluate the effect of the headers on the various test battery components. To close the study, participants were given a debrief sheet and a further opportunity to ask questions or raise concerns.&#13;
Statistical Analysis&#13;
Data pre- and post-heading were evaluated using paired-samples t-tests. The independent variable was the point at which participants completed the test battery (pre- or post-heading exercise). The dependent variables consisted of the data collected from the different measures: PROTXX: individual eyes-open and eyes-closed sway power scores, plus the ratio and average power of these conditions; EEG: PSD for the four frequency bands (alpha, beta, theta and gamma), averaged across each of the seven electrodes for each participant; ImPACT: overall composite scores for each cognitive domain (motor speed, memory and attention); and DiN: SNR thresholds. The paired-samples t-test is specifically designed to compare the means of two related groups; these analyses test for immediate short-term effects that may occur after repetitive sub-concussive head impacts (RSHI). Data were tested for normality using the Shapiro-Wilk test (Shapiro &amp; Wilk, 1965), a step that is crucial to verify whether the data meet the parametric assumption of a normal distribution before proceeding with further analyses. Analyses were performed using the statistical software RStudio (see Appendix D for the RStudio script).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3738">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3739">
                <text>Excel spreadsheet (.xlsx): “Linear Heading Study Data.xlsx”&#13;
R script (.R): “Dissertation_Masters.R”</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3740">
                <text>Andrew2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3741">
                <text>Niko Liu, Anusha Sandeep, David Racovita</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3742">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3743">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3744">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3745">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3746">
                <text>LA2 0PF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3787">
                <text>Dr Helen Nuttall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3788">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3789">
                <text>Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3790">
                <text>17 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3791">
                <text>T-Test&#13;
Other</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="188" public="1" featured="0">
    <fileContainer>
      <file fileId="221">
        <src>https://www.johnntowse.com/LUSTRE/files/original/02bb9218d0b3af78bfd7128818e52817.doc</src>
        <authentication>19a8aed24e888a51cf35142b9e4852b2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3747">
                <text>Prospect theory and intermediate audience: the effects of context on behavioural intention</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3748">
                <text>Wai Man Ko </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3749">
                <text>01/09/2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3750">
                <text>Prospect theory predicts how people react to gain- or loss-framed outcomes in dilemma situations, where the potential consequence of a choice is framed as a gain (e.g., lives saved) or as a loss (lives lost). This gain-loss framing communication strategy, derived from the theory, has been applied in many contexts, from promoting the use of reusable coffee mugs to vaccination compliance, with loss-framed appeals generally found to be more persuasive than gain-framed appeals in the context of promoting vaccination. The current study explored whether these well-established effects persist when an intermediate audience is exposed to gain/loss-framed messaging, using influenza (flu) vaccination intentionality as the outcome. Intermediate audiences are those who evaluate the gains and losses in a message on behalf of someone else (the ultimate audience), while normal audiences make decisions on their own behalf. Two hundred participants were recruited for an online, between-subjects study in which participants were split into two audience conditions and, within those, further split to view either a gain-framed or a loss-framed message. Their subsequent behavioural intentions were measured as the outcome, with age as a potential moderating factor (and emotional attachment as a potential mediator exclusively for the intermediate audience condition). Results indicate that neither age nor emotional attachment is a significant moderator or mediator. The loss-framed appeal enjoyed a persuasive advantage over the gain-framed appeal only in the intermediate audience condition. Possible interpretations of the results, along with potential further directions for research, are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3751">
                <text>Prospect theory, gain/loss framing, intermediate audience, communication research, health communication, vaccination</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3752">
                <text>To test the outlined hypotheses, the current study took the form of an online Qualtrics questionnaire (see appendix B for questions), which introduced participants to one of the audience conditions; they then viewed the appropriate version of the manipulated message before answering items measuring their behavioural intention and emotional attachment. The study had a 2 (intermediate/normal audience condition) × 2 (gain/loss-framed appeal) design, with emotional attachment as a potential mediating variable for the intermediate audience condition and behavioural intention as the outcome variable for all conditions. &#13;
Participants&#13;
We recruited 200 healthy adults based in the UK via Prolific, an online research participant recruitment platform. Participants provided consent and completed the study remotely on their personal devices. Their unique Prolific ID was used as the only identifier in this study, and it cannot be traced back to them personally. Participants were compensated monetarily for their participation.&#13;
We randomly assigned participants to one of four conditions, with 50 participants each: normal gain-framed, normal loss-framed, intermediate gain-framed, and intermediate loss-framed.&#13;
Questionnaire design&#13;
Consent&#13;
Participants gave consent via a Qualtrics consent element, checking a box for each item. There were seven items that participants had to check one by one before commencing the study. Responses that did not complete every consent item were removed from the study.&#13;
Demographics&#13;
For demographics, we recorded participants' age and gender. As mentioned, age was also analysed as a moderator in our analysis. We also recorded Prolific IDs to verify completion and arrange payment.&#13;
Settings of the study&#13;
After giving demographic information, participants were shown a short passage establishing the context of the study. In the normal audience condition, participants were told that someone had sent them an ad about the flu vaccination, referring to the manipulated message they would soon view. In the intermediate audience condition, on top of the information given to the normal audience, participants were additionally told that they were a manager in a small town's paper company, casting them as an intermediate audience (the manager) who must evaluate the subsequent message on behalf of other parties (their employees), with the gains and losses irrelevant to themselves.&#13;
Material&#13;
We chose flu vaccination as the topic for the manipulated messages because COVID vaccines, used in recent studies, are arguably less relevant in what is generally thought of as the post-COVID era. Flu vaccinations, unlike many other vaccines, remain relevant to the general population and most age groups. To resemble real-world settings more closely and increase the generalisability of the results, we formatted the messages as unofficial Facebook posts claiming to be from the NHS. Participants were informed that the graphics were not actual Facebook posts from the NHS, but rather material created solely for this study. See Figure 2 for an example, and appendix A for the complete set of stimuli presented to participants in the study.&#13;
Audience condition. Figure 2 is the gain-framed version of the message from the normal audience condition. In the normal audience condition, the message speaks directly to the participants, stating the potential pros or cons for them if they decide to vaccinate or not vaccinate. In this condition, it is assumed that participants evaluated the message on their own behalf and nobody else's. In contrast, the intermediate audience condition communicates a slightly different message: the "you" in the message is replaced by "your employees". This highlights that participants evaluate the message as an intermediate audience (the manager), deciding whether to recommend the vaccine to somebody else (the ‘ultimate audience’) given the outlined potential gains and losses, while those gains and losses remain irrelevant to the participants personally.&#13;
Message framing. The figure shows a gain-framed message which, as mentioned, follows the logical flow of "if you vaccinate, good things will happen". As we can see in Figure 2, if the recipient vaccinates, then according to the text, he/she would have a reduced chance of infection and a reduction in the duration and severity of symptoms. The loss-framed version of the message follows the logical flow of "if you do not vaccinate, bad things will happen": in contrast to Figure 2, the loss-framed message says that if the recipient does not vaccinate, he/she would have an increased chance of infection and an increase in the duration and severity of symptoms. The two messages communicate the same reality and are logically equivalent; hence, any differences between the groups can be attributed to the message framing.&#13;
Check questions.&#13;
After viewing the message, participants were asked two questions regarding the ad's content before moving on to later questions. The check questions were designed as simple reading comprehension questions to verify that participants attended to the message while reading. We removed all responses that failed to answer either question correctly.&#13;
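As a minimal sketch of this exclusion step in R (the data frame and column names are hypothetical, assuming each check is scored as a logical):&#13;
# Keep only respondents who answered both comprehension checks correctly&#13;
dat = dat[dat$check1_correct &amp; dat$check2_correct, ]&#13;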
Behavioural intention&#13;
After viewing the framed messages, participants completed several 7-point agree-disagree Likert items measuring their behavioural intention. However, given the audience condition differences, and hence potential differences in the decision-making process, behavioural intention is defined differently for the two types of audience. For the intermediate audience condition, behavioural intention is defined as "the intention to recommend/promote the behaviour to the ultimate audience (employees)", while for the normal audience condition we measured the intention to get the vaccination oneself. Both audience conditions responded to six items probing behavioural intention. In the normal audience condition, participants were asked how likely they would be to get the flu jab, how urgent they thought it was, and whether they would be likely to plan to get a flu jab after viewing the message; there were also reverse-worded items asking whether they thought getting a flu jab was NOT urgent. The intermediate audience was asked how likely they were to recommend the flu vaccine to their employees and how urgent and necessary they believed the vaccine was for their employees. (See the appendix for the complete set of questions.)&#13;
Emotional attachment&#13;
As mentioned, there is speculation about the involvement of relational dynamics and related emotions in the intermediate audience. Therefore, we included a set of questions probing participants' emotional attachment towards the employees, exclusively in the intermediate audience condition. There were four questions in total in this part of the study, focusing on participants' sense of protection towards the employees: to what extent participants thought the vaccine was necessary for the employees' own good and well-being, and to what extent participants were eager to protect them; a reverse-worded item was also included. (See the appendix for the complete set of questions.)&#13;
Method of analysis &#13;
We analysed the data using the clm() and clmm() functions from the ordinal package in R (version 4.1.1, via RStudio). We first confirmed the main effects of message framing and audience condition using clm(), and then moved on to analyse the magnitude of random interacting effects of age, question type, and individual differences. Cumulative link models (CLMs) were chosen because they are designed explicitly for ordinal variables such as Likert scales: they predict the probability of each response level and, unlike metric models, avoid the Type I and Type II errors that can result from forcing ordinal variables into metric models (Liddell &amp; Kruschke, 2018). As for emotional attachment, given that each item probed quite a different emotion (e.g., sense of responsibility/sense of protection), we fitted a multivariate ordinal model using the mvord() function to see whether there was a significant difference in the multiple emotional outcomes under the different audience conditions, after which we investigated whether any emotional attachment item was a significant predictor of behavioural intention using another clm model. We also fitted clm() models including the interaction term between age and condition predicting behavioural intention, to test whether age moderates the relationship between message framing and behavioural intention as proposed. Lastly, we fitted a cumulative link mixed model (clmm) to account for potential sources of random effects, such as participant differences and question differences, in the analyses.&#13;
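As a rough illustration only (not the author's script; the data frame and variable names below are hypothetical), models of this kind could be fitted as follows:&#13;
# Sketch of the cumulative link models described above; 'dat' is assumed to be a&#13;
# long-format data frame with a 7-point response 'intention' and factors 'framing'&#13;
# (gain/loss), 'audience' (normal/intermediate), 'participant', and 'item'&#13;
library(ordinal)&#13;
dat$intention = factor(dat$intention, levels = 1:7, ordered = TRUE)&#13;
# clm(): main effects of framing and audience plus their interaction&#13;
m1 = clm(intention ~ framing * audience, data = dat)&#13;
summary(m1)&#13;
# clmm(): adds random intercepts for participants and questionnaire items&#13;
m2 = clmm(intention ~ framing * audience + (1 | participant) + (1 | item), data = dat)&#13;
summary(m2)</text>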
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3753">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3754">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R&#13;
Text/Word.doc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3755">
                <text>Ko2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3756">
                <text>Eleanor Little, Alicia Turner, Laurie Dixon</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3757">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3758">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3759">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3760">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3761">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3762">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3763">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3764">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3765">
                <text>185 participants (124 females, 58 males, 2 non-binary, and 1 undisclosed)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3766">
                <text>Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="189" public="1" featured="0">
    <collection collectionId="8">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="191">
                  <text>Ratings</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="192">
                  <text>Studies where participants make a series of ratings or judgements when presented with stimuli</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3767">
                <text>Does Noise Affect How Children Learn Grammar in the Classroom?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3768">
                <text>Ashlynn Mayo</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3769">
                <text>Academic year: 22-23</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3770">
                <text>In a classroom environment, noise can be a significant impediment, obstructing and distorting essential information being taught. Extensive prior research consistently indicates that noise has a detrimental impact on learning: those who learn in noise retain and comprehend far less information than their counterparts who learn in quiet. To date, no studies have investigated the effect of noise on learning grammar specifically; the primary aim of the current study was to address this research gap. This paper details our recruitment of 16 children aged 7-12 through the Babylab database at Lancaster University. The study employed a between-participants design in which children completed a three-part audio evaluation, engaged in an artificial grammar paradigm, and undertook a working memory task. The artificial grammar paradigm was employed as our primary assessment tool; participants were exposed to the grammar either in noise or in quiet. Results were analysed using a multiple regression with total grammar score as the dependent variable and age, gender, condition, and working memory as the independent variables. In contrast to prior research, our results revealed no statistically significant effect of the independent variables on the dependent variable, consistent with the null hypothesis. These findings suggest that background noise does not affect how children learn grammar in the classroom, challenging the existing understanding that noise negatively impacts learning.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3771">
                <text>Developmental, regression</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3772">
                <text>Participants&#13;
16 children aged 7-12 years old participated in this study; unfortunately, due to technical issues, 5 participants’ data were excluded, leaving 11 children’s data in the analysis (M = 8.64, SD = 1.63; 7 female, 4 male). Children were recruited through the Lancaster University Babylab database and by flyers posted on social media and in the local community.&#13;
A requirement of the current study was that children be English-speaking monolinguals, because an abundance of research indicates that those who speak two or more languages are at a far greater advantage in new language acquisition (Antoniou et al., 2015). We therefore ensured that all participants were English-speaking monolinguals to control for such extraneous variables.&#13;
Furthermore, children were required to have normal or corrected-to-normal vision. To rule out hearing loss, all children had to pass an otoscope inspection, a tympanometry test, and a pure-tone hearing screening at 20 dB in the standard frequencies (250 Hz-8 kHz).&#13;
The current study employed a between-participants design whereby subjects were allocated to a condition based on their age and gender (age was categorised into 7-9 and 10-12) to ensure as equal a number of males and females in each condition across all ages. It was crucial for the validity of the study that children were exposed to the artificial grammar paradigm only once; otherwise the data would be unreliable, as repeat exposure would give an unfair advantage over the other participants.&#13;
Ethical approval for the current study was obtained from the Departmental Ethics Committee (DEC), Psychology Department, Lancaster University.&#13;
Materials&#13;
This study was conducted within a double-walled soundproof chamber at Lancaster University’s PELiCAN lab, where the participant sat at a desk with a monitor placed in front of them. A secondary researcher was present in the lab for health and safety purposes.&#13;
Consent and assent forms, a background questionnaire on the child’s hearing, audio evaluation results, and task data were all recorded on REDCap (Harris et al., 2009; Harris et al., 2019), a GDPR-compliant application for data capture.&#13;
Travel compensation was provided: £5 for journeys within 40 minutes and £10 for journeys over 40 minutes. Furthermore, children received a certificate and a book of their choosing from the PELiCAN lab.&#13;
The audio evaluation&#13;
The study comprised three sections, beginning with an audio evaluation in which an otoscope examination, a tympanometry test, and an audiogram using Affinity Suite were conducted. During the audiogram, participants wore headphones and pressed a handheld button whenever they heard the pure-tone sounds.&#13;
The Artificial Grammar Paradigm&#13;
After passing the hearing evaluation, the children completed an artificial grammar paradigm previously used by Torkildsen et al. (2013), consisting of two grammatical forms: aX and Yb. The paradigm was presented as an alien game in which the children helped an alien learn a new language. We presented the paradigm in this format to increase engagement: children are motivated by the colourful and curious nature of a game (Blumberg et al., 2019), making us far more likely to obtain complete data (fewer drop-outs due to fatigue and boredom). This task was created in PsychoPy and hosted on Pavlovia.&#13;
The background noise&#13;
To imitate the background noise of a classroom, speech-shaped noise (SSN) (e.g., Leibold et al., 2013) was emitted through a speaker on the back wall of the booth behind the child. The background-noise speaker was at 180 degrees on the azimuth, and the target speaker at 0 degrees. Background stimuli were calibrated so that the stimulus was emitted at 35 dB in the quiet condition and at 65 dB in the noisy condition.&#13;
The n-back Test of Working Memory&#13;
Lastly, we conducted the 1-back test of working memory (Owen et al., 2005), which was also created in PsychoPy and hosted on Pavlovia.&#13;
Procedure&#13;
Prior to the commencement of the study, guardians gave informed consent (see Appendix C); if the child was 11 or older, they also gave informed assent (see Appendix D). Guardians were then asked to complete a short background questionnaire pertaining to their child’s hearing (see Appendix H). Whilst they completed these forms, the researcher began the study inside the booth; using Affinity Suite, the microphone inside the booth was turned on so that the guardian could hear what was going on inside the booth through the headphones placed outside it.&#13;
As aforementioned, the audio evaluation consisted of three tests, administered in the booth by the researcher and taking up to 15 minutes. Firstly, an ear inspection was conducted using an otoscope; participants were required to have clear ears free of perforations and/or any infection. Secondly, a tympanometry test was conducted, which participants had to pass with type A (normal) results. Lastly, a pure-tone hearing screening was conducted at 20 dB in the standard frequencies (250 Hz-8 kHz). The researcher left the booth for the audiogram in order to run the program on the desktop outside while the child remained inside.&#13;
The task consisted of 11 blocks comprising 4 exposure items and 2 test items. Before the test portion, children were shown 4 examples of what was expected of them, which they had to get right for the software to move on to the test phase; if they did not, the researcher explained and prompted them to pick the correct answer. Children pressed ‘x’ on the keyboard for right and ‘n’ for wrong, and answers were saved and recorded automatically on Pavlovia. The software was run by the researcher from outside the booth and mirrored onto the desktop inside it.&#13;
Lastly, we conducted the 1-back test of working memory (Owen et al., 2005), in which children were exposed to a series of animal sounds and required to record whether each stimulus was a new sound or one they had heard before: ‘x’ represented a repeated sound and ‘n’ a new sound. Participants had to make a button press after each noise.&#13;
Once all tasks were completed, the researcher collected the child from the booth, and a short verbal and written debrief was given to the child and guardian. Guardians were given and signed for their travel compensation, and children received a certificate from the PELiCAN lab and chose a book of their liking. Participants were walked back to their car or bus to bring the visit to a close.&#13;
Analysis&#13;
In order to answer our research questions, we will carry out a multiple linear regression using IBM SPSS Statistics (version 28). We will employ a between-participants design examining the effect of background noise (noisy vs. quiet) on total grammar score; our additional independent variables will be working memory, gender, and age. If we find a statistically significant result with regard to grammar score, we will conduct post hoc tests breaking grammar scores down into aX and Yb items in order to determine the difference between the two types of grammar.&#13;
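Although the project uses SPSS, a minimal R sketch of the same multiple regression (column names are hypothetical, not the project’s actual variables) may help illustrate the model:&#13;
# Multiple linear regression of total grammar score on condition, working memory,&#13;
# gender, and age (R equivalent of the SPSS analysis; column names are assumptions)&#13;
model = lm(total_grammar ~ condition + working_memory + gender + age, data = dat)&#13;
summary(model)&#13;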
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3773">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3774">
                <text>Data/Excel.xls</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3775">
                <text>Mayo2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3776">
                <text>Chloe Massey, Molly Pugh, Chloe Kitis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3777">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3778">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3779">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3780">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3781">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3782">
                <text>Hannah Stewart</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3783">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3784">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3785">
                <text>11 participants (7 Female, 4 Male)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3786">
                <text>Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="190" public="1" featured="0">
    <fileContainer>
      <file fileId="208">
        <src>https://www.johnntowse.com/LUSTRE/files/original/986ca14e7163ef0ec031b820f41202ef.pdf</src>
        <authentication>15ac31078692a6a822b1e06dfab1c670</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaires(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3792">
                <text>Inner Speech and Its Role in Purchasing Decision-Making Process: Analysis of Within-Subjects Experiment and Questionnaires</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3793">
                <text>Han-Yi Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3794">
                <text>2022-23</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3795">
                <text>Inner speech is a cognitive function related to language processes. Given its functions in information processing and memorisation, it may be linked to the purchasing process, which includes searching for and evaluating product information. Inner speech may also help people think about and imagine using a product in the future during the purchasing process.&#13;
This study discussed and investigated the role of inner speech in the purchasing process and how it might affect decision-making time. It also considered how inner speech may be identified and suppressed. Participants’ data were collected through experiments and several questionnaires. The findings indicated that inner speech might help people in information search and alternative evaluation and affect decision time. The findings also suggested what people may consider and how they use inner speech.&#13;
By uncovering the potential relationship between the purchasing process and inner speech, this research provides valuable information for the marketing and psychology research fields, and gives companies suggestions for practical use, reflecting how people may use inner speech during the purchasing process.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3796">
                <text>inner speech, purchasing behaviour, memory, decision-making</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3797">
                <text>Methods Section:&#13;
This study was approved by the ethics committee at Lancaster University. There were no ethical issues for researchers managing the personal information. Participants’ information remained anonymous: each participant was assigned a subject ID (P01, P02, P03…, P30 in Experiment 1 and PCT01, PCT02, PCT03…, PCT30 in Experiment 2), and all data were stored anonymously with no identifiable information. &#13;
Participants were given the Participant Information Sheet (PIS) before participating in the experiments. On the day of testing, they could ask any questions they had, then consented to attend the experiment in person or via online platforms such as Microsoft Teams, Zoom, or Google Meet, which allowed the researcher to ensure that the suppression was active when needed. The experiment took approximately 30 minutes, including answering all questionnaires, and was held in the participant’s home or another quiet place where no one was speaking, so that the participant would not be disturbed.&#13;
Experiment 1&#13;
Participants&#13;
G*Power suggested 52 participants for within-subjects t-tests and multiple mixed linear regression models, with an effect size of .4, an α-error probability of .05 (5%), and 80% power (1 - β) (Brysbaert, 2019). Thirty participants were recruited in this experiment, with no record or history of neurophysiological disorders such as dyslexia or aphasia, to ensure that no such conditions would influence the results or prevent participants from completing the tasks. The recruitment process included in-person invitations around campus and social media messages to reach diverse participants.&#13;
Although only 30 participants were recruited in this experiment, the results of the t-tests suggest that the effect size (see the Experiment 1 results section) may be large enough for testing the hypothesis.&#13;
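As a minimal sketch of this a priori power computation in R (an assumption: the pwr package is used here rather than G*Power, though the two should agree):&#13;
library(pwr)&#13;
# Paired (within-subjects) t-test: d = .4, alpha = .05, power = .80&#13;
pwr.t.test(d = 0.4, sig.level = 0.05, power = 0.80, type = "paired")&#13;
# n comes out at roughly 51, i.e. 52 participants after rounding up&#13;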
Design&#13;
This study used an experimental within-subjects design: participants simulated a purchasing experience in both the suppression task and a control task without interference. The independent variables were self-rated agreement on information search and alternative evaluation and participants’ average decision time in the suppression and control tasks. The dependent variables were inner speech frequency in five dimensions, measured by the Inner Speech Frequency Questionnaire (VISQ). &#13;
Quantitative data were analysed in R using t-tests, GLMMs, and CLMMs. Qualitative data were collected through questionnaires and categorised into different variables to identify why participants made their decisions and what their inner speech content was during the purchasing process.&#13;
Overall, the experiment aimed to investigate how people use inner speech during purchasing and whether the articulatory suppression task and the task without interference influenced decision time and agreement scores on information search and alternative evaluation.&#13;
Materials&#13;
Stimuli&#13;
Participants viewed six product sets (stimuli) whose information was copied from the official websites. To prevent participants from focusing on the effect of the products’ brands and prices (Albari &amp; Safitri, 2020), the products in each set were of the same brand with similar or identical prices, unisex, and recognisable, although these products might no longer exist or reflect the latest information on the market.&#13;
Two-item Statement Questions (see Appendix B)&#13;
Participants rated the two statements on a seven-point Likert scale from strongly disagree to strongly agree (Maity &amp; Dass, 2014) to identify agreement levels for information search and alternative evaluation between tasks. Participants were then asked, “Which product did you choose? Why?” after each purchasing decision.&#13;
Variety of Inner Speech Frequency Questionnaire (VISQ, see Appendix C)&#13;
The Inner Speech Frequency Questionnaire (Alderson-Day et al., 2018) included twenty questions asking participants to rate their general inner speech frequency after the mock e-commerce purchasing tasks on a 7-point Likert scale ranging from "Never" to "All the time". Questions 7 and 15 were reverse-coded; their values must be reversed before analysis.&#13;
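On a 7-point scale the reversed value is 8 minus the response; a minimal R sketch of this step (column names are hypothetical):&#13;
# Reverse-code VISQ items 7 and 15 before analysis&#13;
visq$q7 = 8 - visq$q7&#13;
visq$q15 = 8 - visq$q15&#13;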
Experiment 1 Qualitative Questions (ExpQ1, see Appendix D)&#13;
After participants finished all the tasks (six decisions), they were asked to answer three questions at the end of the experiment. These questions gathered qualitative data about the participants’ experiences during the mock e-commerce purchasing tasks and what they had in mind. &#13;
Procedure&#13;
Figure 2 shows the design of Experiment 1. Participants were invited to, and consented to join, the research, completing the suppression and control (without interference) tasks. &#13;
Each task contained three product sets; participants were asked to imagine choosing a product for themselves or a friend according to the information provided on the mock e-commerce channel (Maity &amp; Dass, 2014). The researcher’s or participant’s screen presented the information, including the price and details of the product set. Since the two tasks were counterbalanced and randomly ordered, participants repeated the decision-making process three times in the control task and three times in the suppression task. After each decision, participants answered the two-statement questionnaire and explained which product they chose and why. In the control task, participants proceeded by themselves; before the suppression task, however, they practised counting out loud from 1 to 4 following 160 bpm metronome sounds until the researcher was sure they remained suppressed.&#13;
Then, in the last part of the study, they answered the VISQ, which measured their inner speech frequency, and the qualitative questionnaire (ExpQ1), to capture how they used inner speech when viewing the products. &#13;
Analysis&#13;
R was used to analyse the quantitative data, identifying task differences via t-tests and the relationships between variables in the two tasks via Generalised Linear Mixed-Effects Models (GLMM) and Cumulative Link Mixed Models (CLMM). When conducting the GLMM with a gamma family, the quantitative data followed the standard data-trimming procedure, keeping the trimmed data within 5% or 2.5 standard deviations (Berger &amp; Kiefer, 2021). &#13;
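As a rough sketch of these models (not the author's script; lme4's glmer() is assumed here for the gamma GLMM, and the data frame and column names are hypothetical):&#13;
library(lme4)    # glmer() for the gamma GLMM&#13;
library(ordinal) # clmm() for the cumulative link mixed model&#13;
# Decision time (positive, right-skewed): gamma GLMM with a log link and&#13;
# random intercepts per participant&#13;
m_time = glmer(decision_time ~ task + (1 | participant), family = Gamma(link = "log"), data = dat)&#13;
# 7-point agreement ratings: cumulative link mixed model&#13;
dat$rating = factor(dat$rating, levels = 1:7, ordered = TRUE)&#13;
m_rating = clmm(rating ~ task + (1 | participant), data = dat)&#13;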
The qualitative coding scheme (see Appendix F) was created to identify what participants considered and what they said to themselves using inner speech during the experiment. The coding process involved re-reading the data to identify relevant content and assign it to the appropriate categories. For example, if a participant mentioned having used the product before, the value of the variable “Memory” increased by one unit. These variables were then tallied to identify which factors influenced participants’ purchasing decision-making more. Following the same coding scheme, the kind of inner speech used when viewing the products could also be identified; for example, people may ask themselves questions or repeat the product name in their mind.&#13;
In summary, quantitative and qualitative data were analysed to report results for different purposes and to test the hypotheses of this research.&#13;
Experiment Optimising&#13;
The task without interference in Experiment 1 may not be a reasonable control task, since it lacked a secondary task: participants completed both tasks, so performance in the control task might have been influenced by having already done the suppression task. &#13;
As a secondary task, the finger-tapping task, which has been used in inner speech experiments, could be a better control task for Experiment 2 (Emerson &amp; Miyake, 2003; Wallace et al., 2009). Although finger-tapping might affect working memory function and people’s ability to memorise (Armson et al., 2019; Kane &amp; Engle, 2000; Moscovitch, 1994; Rose et al., 2009), Rogalsky et al. (2008) noted that performance in understanding complex sentences may decrease, but not as much as when articulatory suppression occurs. &#13;
The second experiment was therefore motivated by the aim of replicating the results with a better control condition involving finger-tapping.&#13;
Experiment 2 &#13;
Participants&#13;
Based on the findings of Experiment 1, another 30 participants were recruited, with the same requirements and recruitment process as the first experiment.&#13;
Design&#13;
The independent variables were similar to Experiment 1; the only difference was that the control task was changed to the finger-tapping task. The goal of the design was to replicate the results of Experiment 1 and further investigate the role of inner speech in the purchasing process.&#13;
Materials&#13;
Experiment 2 applied the same materials used in Experiment 1. The only difference was the qualitative questions after the tasks: in Experiment 1, participants answered the “Experiment 1 Qualitative Questions” at the end of the experiment, whereas here, to better capture the difference between tasks, they answered a similar questionnaire (see below) after each task, to uncover the inner speech used in the two tasks.&#13;
Experiment 2 Qualitative Questions (ExpQ2, see Appendix E)&#13;
Participants were asked to answer three questions about their experiences during the mock e-commerce purchasing tasks and what they had in mind, separately for the suppression and finger-tapping tasks.&#13;
Procedure&#13;
The procedure was the same as in the first experiment, except for the adjusted control task and the order of the qualitative questionnaire (ExpQ2). Figure 3 illustrates that participants were invited to the experiment with the same stimuli, similar questionnaires, and the same method of presenting stimuli (participants joined in person or via online platforms), with suppression and finger-tapping tasks. Participants practised counting 1, 2, 3, 4 out loud or tapping their index, middle, ring, and little fingers in order (depending on which task came first), following metronome beats at 160 bpm, before the researcher decided to move on. They were asked to view the product sets, imagining choosing one for a friend or for themselves, three times in each task. After each decision, participants answered the two statements and reported which product was chosen and why. They then answered the three qualitative questions (Appendix E) after each task, and repeated the process for the other task with a 2-minute break between tasks. After finishing the finger-tapping and suppression tasks, they answered the VISQ questions at the end of the experiment.&#13;
Analysis&#13;
R was also used to analyse the quantitative data for the same purposes, following the same data-trimming procedure where needed. The same coding scheme was followed to produce results that could replicate and clarify the Experiment 1 findings. Overall, the second experiment aimed to generate the same or clearer results as Experiment 1 and to provide more information about the different inner speech used between tasks.&#13;
In conclusion, these two experiments and their analyses give this research a deeper understanding of inner speech and its role, and provide more precise information on how inner speech may relate to the purchasing process.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3798">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3799">
                <text>The data set is in csv format.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3800">
                <text>Wang2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3801">
                <text>Melanie Thomas&#13;
Vickie Huang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3802">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3803">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3804">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3805">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3806">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3807">
                <text>Dr Bo Yao</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3808">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3809">
                <text>Cognitive &#13;
Development </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3810">
                <text>60 participants &#13;
30 in experiment 1&#13;
30 in experiment 2</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3811">
                <text>Linear mixed-effects modelling, Power Analysis, Qualitative, Regression, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="191" public="1" featured="0">
    <collection collectionId="8">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="191">
                  <text>Ratings</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="192">
                  <text>Studies where participants make a series of ratings or judgements when presented with stimuli</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3812">
                <text>The Effects of Posture on Body Part Width Representations </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3813">
                <text>Lettie Wareing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3814">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3815">
                <text>Despite the ubiquity of our bodily experiences, our representations of our body’s size are not geometrically accurate. For example, when estimating the length of body parts using the hand as a metric, consistent patterns of distortions across body parts are observed. Given the presence of these distortions, some have proposed that representations of length and width emerge directly, or indirectly, from the organisation of somatotopic maps in somatosensory cortex, rather than from their actual relative dimensions. However, whilst length representations are well researched with respect to this notion, less is known about representations of body part width across the body. Moreover, it is unclear from previous research whether body part width representations may be confounded by participants’ posture. Specifically, individuals have shown an enhanced tendency to overestimate body part width when seated upon a chair, suggesting that the chair may become incorporated into the body representation. Consequently, the aim of the current investigation was to further elucidate how width is represented across body parts and whether posture moderates these representations. Participants estimated how many hand widths made up the width of the back, shoulders, hips, torso, thigh, and head in one of three conditions: standing (n = 37), seated upon a chair (n = 33), or seated upon a backless stool (n = 39). Whilst estimates did differ across body parts, no effect of posture was observed. Moreover, the patterns of distortions observed differed from those seen in previous investigations. Results therefore indicate that body part width representations are neither accurate nor fixed; rather, they show distortions which vary across individuals and contexts. It is proposed that inter-individual heterogeneity in width representations may result from humans possessing alternative perceptual mechanisms for judging aperture passability. Maintaining fixed width representations may therefore be unnecessary, and hence too energetically costly to be worthwhile.&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3816">
                <text>Body perception, affordances, somatosensation, visual perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3817">
                <text>Method&#13;
Participants&#13;
Ethical approval for this study was obtained from Lancaster University Psychology Department on 31st May 2023.&#13;
As this study aimed to investigate body part width representations in healthy populations, only participants aged 18-55 years without any physical or mental impairment were included in the study. However, as previous research (Readman et al., 2021) using the same paradigm for length estimates has shown no influence of anxiety or depression on body part estimates, participants with diagnoses of these conditions were not excluded. Participants were excluded if they had any current or historic diagnosis of cognitive impairment, as this can affect instruction comprehension (Han et al., 2011), or visual impairment, to ensure difficulties in seeing the body parts did not confound findings. Furthermore, given the associations of other psychiatric impairments (e.g., Priebe &amp; Röhricht, 2001), neurological impairments (e.g., Blanke et al., 2004), and eating disorders (Mölbert et al., 2017) with distorted body perceptions, individuals with a current or historic diagnosis of a condition falling within any of these categories were excluded. &#13;
A total of 123 participants (61 females), ranging from 18 to 68 years (M = 28.80 years, SD = 10.79), were recruited via opportunity sampling for this study. Recruitment ended before the required N = 150 was reached due to time constraints. All participants were entered into a draw to win one of two £25 vouchers as an expression of goodwill. A total of 15 participants were excluded for failing to meet the inclusion criteria, leaving a final sample of N = 108 (50 females). Participants were aged 18 to 55 years (M = 27.98 years, SD = 9.56); the majority of participants were right-handed (n = 99) and over half the participants had normal vision (52.78%), with the remaining participants having corrected-to-normal vision. &#13;
Reasons for exclusion included a current or historic psychiatric impairment (n = 2) or eating disorder (n = 4), falling outside the study age restrictions (n = 3), visual impairment (n = 2), being pregnant (n = 1), failing to provide demographic information needed to determine eligibility (n = 2), and a self-reported misunderstanding of task instructions (n = 1). &#13;
Design&#13;
This study constituted a 3x6 mixed design with condition (standing, chair, or stool) as the between-subjects variable and body part (torso, hips, shoulders, back, thigh, or head) as the within-subjects variable. The dependent variable was participants’ accuracy ratios for each body part (estimated size/actual size), where an accuracy ratio of over 1.0 indicated overestimation, and under 1.0 indicated underestimation of body part width.&#13;
Materials and Procedure&#13;
After providing their consent, participants completed a self-report demographic and clinical questionnaire administered via Qualtrics (Qualtrics, Provo, UT) which asked about participants’ age, biological sex, preferred hand, and details regarding their neurological, cognitive, and psychiatric history.&#13;
Following this, participants were randomised to one of the three conditions (Standing, Chair, or Stool). In each condition, participants were asked to estimate how many hand widths of their dominant hand made up the width of six different body parts: the torso, shoulders, hips, back, head, and thigh. Participants were instructed to be as accurate as possible, using fractions where necessary. They were asked to refrain from touching the body part with their hand, and from basing estimates on those made for previous body parts if the two were proportionally related. The researcher defined each body part verbally and pointed to its endpoints on their own body prior to the participant making their estimate. &#13;
Participants in the standing condition performed all estimates whilst standing upright, without leaning on any surfaces. In the chair condition, participants were seated upon a standard desk chair with a high back and no arm rests. In the stool condition, participants were seated upon a fixed height bar stool with no back. Allocation to conditions was counterbalanced across participants, and the order of body parts estimated was randomised.&#13;
After making their estimates, the researcher used a soft tape measure to measure the actual width of the cued body parts before debriefing participants. The study took around 10 minutes to complete.&#13;
Analysis&#13;
Prior to conducting the analysis, outliers were removed using the median absolute deviation (MAD) approach. This procedure involves removing participants whose accuracy ratios deviated more than three median absolute deviations from the median for a given body part. The MAD approach was chosen as it is more robust than traditional methods of outlier detection based upon standard deviations from the mean (Jones, 2019; Leys et al., 2013).&#13;
To calculate the dependent variable of accuracy ratios, first, participants’ hand estimates for each body part were converted to centimetres by multiplying their estimate in hands by their measured hand width. After this, estimates for each body part were divided by the actual width of the body part to produce an accuracy ratio. &#13;
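As a purely illustrative sketch of this pipeline (not the authors’ R/rstatix code; variable names are hypothetical), the accuracy ratios and the MAD screen could be computed as follows:&#13;
&#13;
    import numpy as np&#13;
&#13;
    def accuracy_ratios(est_hands, hand_width_cm, actual_width_cm):&#13;
        # Convert hand-unit estimates to cm, then divide by the measured width,&#13;
        # so ratios above 1.0 indicate overestimation.&#13;
        return (np.asarray(est_hands) * hand_width_cm) / np.asarray(actual_width_cm)&#13;
&#13;
    def mad_outlier_mask(ratios, cutoff=3.0):&#13;
        # Flag ratios more than cutoff median absolute deviations from the median.&#13;
        # The 1.4826 consistency constant is conventional (cf. Leys et al., 2013);&#13;
        # whether it was applied in the original analysis is not stated.&#13;
        ratios = np.asarray(ratios, dtype=float)&#13;
        med = np.median(ratios)&#13;
        mad = 1.4826 * np.median(np.abs(ratios - med))&#13;
        return np.abs(ratios - med) > cutoff * mad&#13;
&#13;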
To test the study hypotheses, data were analysed using a 3x6 mixed ANOVA implemented with the rstatix package in R (Version 4.2.1). Body Part was entered as the within-subjects variable, and Condition as the between-subjects variable. The assumption of normality was checked using the Shapiro-Wilk test, and the sphericity assumption via Mauchly’s test. Partial eta-squared was used as a measure of effect size.&#13;
Though frequently used in analysis, frequentist statistics are not without limitations. It is typically assumed that a p-value of &lt;.05 is evidence for the alternative hypothesis; however, this value only represents the probability of obtaining results as extreme as those observed if the null is true (Wagenmakers et al., 2018). Therefore, data which are unusual under the null hypothesis are not automatically any less unusual under the experimental hypothesis (Wagenmakers et al., 2017). Moreover, a non-significant finding in frequentist analyses cannot be taken as evidence in favour of the null hypothesis (Kruschke &amp; Liddell, 2018). In this regard, Bayesian statistics have several advantages over frequentist statistics, including the ability to incorporate prior knowledge, to quantify the degree of uncertainty surrounding the existence of an effect, and to quantify the strength of evidence in favour of the null or alternative hypotheses (see Wagenmakers et al., 2018, for a discussion). &#13;
Consequently, to provide further support for conclusions drawn using frequentist analyses, a Bayesian mixed ANOVA was conducted using the anovaBF function from the BayesFactor package in R (Version 4.2.1). Default priors were used, given that these reflect average effect sizes observed across psychological experiments and hence are likely to be more reliable than priors drawn from a single, potentially methodologically flawed, study (Rouder et al., 2012). &#13;
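The models above were fitted in R; purely as an illustration of the frequentist 3x6 mixed ANOVA (hypothetical long-format file and column names, not the original script), an analogous analysis could be sketched in Python:&#13;
&#13;
    import pandas as pd&#13;
    import pingouin as pg&#13;
&#13;
    # Hypothetical long-format data: one row per participant x body part,&#13;
    # with columns participant, condition, body_part, ratio.&#13;
    df = pd.read_csv("ratios_long.csv")&#13;
    aov = pg.mixed_anova(data=df, dv="ratio", within="body_part",&#13;
                         between="condition", subject="participant")&#13;
    print(aov)  # pingouin reports partial eta-squared in the np2 column&#13;
&#13;
The Bayesian counterpart (anovaBF with default priors) has no direct equivalent in this sketch and is not reproduced here.&#13;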
Where a significant main effect of Body Part or Condition was observed, Holm-Bonferroni-adjusted frequentist and Bayes Factor pairwise t-test comparisons were conducted to determine the pattern of differences underlying these effects. &#13;
In addition, to determine whether body part width estimates differed significantly from 1.0 (i.e., an unbiased estimate), Holm-Bonferroni-adjusted frequentist and Bayes Factor one-sample t-tests were conducted for each body part. &#13;
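Continuing the sketch above, the one-sample tests against an unbiased ratio of 1.0 with Holm adjustment might look like this (again illustrative, not the original script):&#13;
&#13;
    from scipy.stats import ttest_1samp&#13;
    from statsmodels.stats.multitest import multipletests&#13;
&#13;
    body_parts = ["torso", "hips", "shoulders", "back", "thigh", "head"]&#13;
    # One p-value per body part, testing the mean accuracy ratio against 1.0&#13;
    pvals = [ttest_1samp(df.loc[df.body_part == bp, "ratio"], popmean=1.0).pvalue&#13;
             for bp in body_parts]&#13;
    reject, p_holm, _, _ = multipletests(pvals, alpha=0.05, method="holm")&#13;
&#13;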
To judge the strength of evidence provided by the Bayes Factor analyses, the Kass and Raftery (1993) criteria were used. By these criteria, anecdotal evidence is regarded as inconclusive. Percentage error (a measure of the uncertainty in the estimate) was reported alongside Bayes Factors, where &lt;20% is regarded as an acceptable level of uncertainty (Van Doorn et al., 2021).&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3818">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3819">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3820">
                <text>Wareing2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3821">
                <text>Leanna Keeble</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3822">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3823">
                <text>None </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3824">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3825">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3826">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3827">
                <text>Dr Sally Linkenauger</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3828">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3829">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3830">
                <text>123</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3831">
                <text>ANOVA, Bayesian Analysis, T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="192" public="1" featured="0">
    <fileContainer>
      <file fileId="212">
        <src>https://www.johnntowse.com/LUSTRE/files/original/7d6c9cf5fdd98d716c94e889c243c0c0.pdf</src>
        <authentication>fa4b33e4b92ee93a65616bbab7185e5c</authentication>
      </file>
      <file fileId="213">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f11ffa6a464eee8a38144f043e6d8a06.pdf</src>
        <authentication>9e37ad79ac89170b5ec0237b8d9230f6</authentication>
      </file>
      <file fileId="214">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8093b4f91fa9d0452695e80ef3ecf6eb.pdf</src>
        <authentication>671adccd1d64ac672834905ab18a0ce2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="3">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="181">
                  <text>EEG</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="182">
                  <text>Electroencephalography (EEG) is a method for monitoring electrical activity in the brain. It uses electrodes placed on or below the scalp to record activity with coarse spatial but high temporal resolution</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3832">
                <text>N1 Adaptation: Exploring the Neuronal Basis of the Interaction Between Auditory Sensory Memory and Attention</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3833">
                <text>Gengjie Jack Ho</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3834">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3835">
                <text>The aim was to explore whether voluntarily focusing on repetitive auditory stimuli influences the lifetime of N1 adaptation, which indexes the lifetime of auditory sensory memory. Twenty-six neurotypical participants with self-reported normal hearing were recruited from Lancaster University. Electroencephalogram (EEG) recording took place in a sound-attenuated laboratory. A two-by-two factorial design was employed, where one factor manipulated the presence or absence of attention, whereas the other factor manipulated the stimulus-onset interval (SOI), which primarily served to calculate the lifetime of adaptation. Three different amplitude measurement methods were used to calculate the N1 amplitude; therefore, three sets of statistical analyses were performed for each investigation. For the preliminary investigation, two-way ANOVAs were conducted to evaluate the impact of attentional focus (presence or absence) and SOI (short or long) on the amplitude of N1. For the primary investigation, paired-samples t-tests were conducted to evaluate whether the presence or absence of attention influences the N1 adaptation lifetime. The preliminary results indicated no significant difference in N1 amplitude between the presence and absence of attentional focus. There was also no significant effect of SOI, except under one of the amplitude measurement methods, which showed greater N1 amplitudes in the Long SOI condition. The primary results indicated that the presence or absence of attention had no significant effect on the adaptation lifetime across all three amplitude measurement methods. However, the study suffered from low statistical power and possible issues with the methodological design due to the combined use of visual and auditory modalities to manipulate attentional focus. Therefore, it is inappropriate to draw conclusions from the findings of this study. Methodological improvements and theoretical implications were discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3836">
                <text>neuropsychology, attention, auditory sensory memory, N1 adaptation, sensory processing, neural responses</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3837">
                <text>Methods Section:&#13;
Participants&#13;
Twenty-six neurotypical participants with self-reported normal hearing (9 males, 16 females, 1 preferred not to say), all of whom were students from Lancaster University, were recruited using opportunity sampling via advertising on social media platforms and SONA. The age range of the participants spanned from 18 to 34 years (M = 22.85, SD = 2.55). Sixteen participants were excluded due to excessive electrical noise, resulting in a remaining pool of 10 participants. All participants provided written consent and volunteered to participate in the experiment. The study received ethical approval from Lancaster University’s Department of Psychology.&#13;
Stimuli&#13;
The experiment employed the oddball paradigm to elicit auditory responses. The standards were presented at a constant rate (210 repetitions per condition), while the deviants appeared unpredictably with a 5% probability (10 deviants per condition). The sequence of standards and deviants remained consistent across all conditions. The standards were presented as a 500-Hz pure tone, while the deviants were a 503-Hz pure tone. The duration of each tone was 100 milliseconds, with 10 milliseconds of linear onset and offset ramps. All tones were presented at a consistent and comfortable volume level (28% volume on Windows 10). The auditory stimuli were programmed and delivered using MATLAB.&#13;
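The stimuli were programmed in MATLAB; as a rough Python sketch of a comparable stimulus (tone parameters taken from the description above, sample rate assumed), one might write:&#13;
&#13;
    import numpy as np&#13;
&#13;
    def pure_tone(freq_hz, dur_s=0.100, ramp_s=0.010, fs=44100):&#13;
        # 100-ms pure tone with 10-ms linear onset and offset ramps&#13;
        t = np.arange(int(dur_s * fs)) / fs&#13;
        y = np.sin(2 * np.pi * freq_hz * t)&#13;
        n = int(ramp_s * fs)&#13;
        env = np.ones_like(y)&#13;
        env[:n] = np.linspace(0.0, 1.0, n)&#13;
        env[-n:] = np.linspace(1.0, 0.0, n)&#13;
        return y * env&#13;
&#13;
    standard = pure_tone(500.0)  # 500-Hz standard&#13;
    deviant = pure_tone(503.0)   # 503-Hz deviant&#13;
&#13;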
Design&#13;
The study followed a two-by-two factorial design (see Figure 1). It included two attention conditions: Active and Passive. In the Active condition, participants were presented with a stream of standards and deviants while focusing on a fixation cross. Their objective was to count the occurrences of deviants. In the Passive condition, participants viewed a nature documentary displayed on a smartphone screen. Their objective was to count the number of animal species featured in the documentary while ignoring the stream of auditory stimuli playing simultaneously in the background. Both the fixation cross and the smartphone screen were positioned one metre in front of the participants. Additionally, there were two SOI conditions: Short SOI (1.7 seconds) and Long SOI (3.4 seconds). The oddball paradigm was integrated into a stimulus block design, with two types of stimulus blocks, each having a specific SOI. Note that the order of the conditions was randomized among participants.&#13;
The purpose of the design was to manipulate attention towards repetitive auditory stimuli and calculate adaptation lifetime. The counting tasks in the Active and Passive conditions manipulated attentional focus. In the Active condition, the count-the-deviants task aimed to maintain participants’ attention on the repetitive auditory stimuli. In the Passive condition, the count-the-animal-species task aimed to divert participants’ attention away from the repetitive auditory stimuli using visual stimuli in the form of a nature documentary. Additionally, the counting tasks served as a quality control measure, excluding participants whose answer substantially differed from the correct answer. Conversely, the inclusion of both short and long SOI measured adaptation lifetime using the amplitude ratio (explained below in Data Analysis).&#13;
 Figure 1. A visual representation of the study’s two-by-two factorial design, encompassing four distinct conditions: Active with Short SOI (1.7s), Passive with Short SOI (1.7s), Active with Long SOI (3.4s), and Passive with Long SOI (3.4s).&#13;
Procedure&#13;
EEG was used as the method of data collection. The Enobio NIC2 suite recorded EEG data, using three dry electrodes (Fpz, Cz, and Fz) to capture neuroelectrical activity in the auditory cortex (Neuroelectrics, n.d.). Data recording was conducted in a sound-attenuated laboratory. The entire experiment lasted approximately 60 minutes, which included a 20-minute preparation period.&#13;
Before the experiment, participants were sent an information sheet online and completed a consent form upon arrival. They were then fitted with an electrode cap and headphones, and instructed to avoid excessive movement during recording to minimise muscle artifacts. While recording was ongoing, participants were verbally given instructions at the start of each condition, and they were asked about their answers to the counting tasks after each condition. Short breaks were allowed when transitioning between conditions. After the experiment, participants were asked about their age and gender, and received a verbal and written debrief regarding the true purpose of the study.&#13;
Data Analysis&#13;
We conducted a priori power analyses using G*Power 3.1 to determine the required sample size for testing the two hypotheses (Faul et al., 2007). For the preliminary investigation, results indicated that the required sample size to achieve 80% power for detecting a medium effect at a significance criterion of α = .05 was N = 36 for a two-way ANOVA. For the primary investigation, results indicated that the required sample size to achieve 80% power for detecting a medium effect at a significance criterion of α = .05 was N = 34 for a paired-samples t-test. Our recruitment target of 36 participants was based on the larger of the two required sample sizes.&#13;
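The paired-samples figure can be reproduced approximately in Python, assuming the conventional medium effect size of d = 0.5 (the text says “medium effect” without giving a number):&#13;
&#13;
    import math&#13;
    from statsmodels.stats.power import TTestPower&#13;
&#13;
    # Two-sided paired-samples t-test on difference scores:&#13;
    # d = 0.5, alpha = .05, target power = .80&#13;
    n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80,&#13;
                                 alternative="two-sided")&#13;
    print(math.ceil(n))  # 34, matching the figure reported above&#13;
&#13;
The two-way ANOVA figure depends on G*Power’s f-based effect-size conventions and is not reproduced here.&#13;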
In data preprocessing, we discarded the first few trials from each condition to minimise initial variability in orienting and habituation effects, and excluded any unidentifiable N1 responses.&#13;
Measuring the N1 amplitude is essential for estimating adaptation lifetime and conducting the planned data analysis. Three methods are available: N1, N1-P2, and mean voltage displacement. Baseline correction was performed as a standard initial procedure, using a baseline window of 100 milliseconds in this experiment. The first method identifies and measures the N1 amplitude as the point of maximum negativity (Marton et al., 2018). The second method measures the peak-to-peak amplitude difference between N1 and P2, as it captures the relationship between the two and avoids the problem of a noisy baseline by not depending on the pre-stimulus baseline (Al-Abduljawad et al., 2008; Scaife et al., 2006). The third method estimates the mean voltage displacement (absolute amplitude value) over a specific time frame, particularly useful when the N1 component is difficult to identify or the stimulus onset is ambiguous (Hoehne et al., 2020; Komssi et al., 2004). All three methods were employed to conduct a more comprehensive data analysis, given that consistent findings across different methods increase the reliability of results and inconsistencies can guide further investigation.&#13;
In the traditional approach for estimating adaptation lifetime, one uses multiple stimulus blocks, each featuring varying SOIs ranging from 0.5 to 10 seconds. The ERP is derived separately for each stimulus block, and the peak N1 amplitude is plotted as a monotonically increasing function of SOI. This relationship between the N1 amplitude and the SOI can be described as an exponentially saturating function, represented by the model equation A(1 - e^(-(t - t₀)/τ)), where A (amplitude), τ (time constant), and t₀ (time origin) represent fitting parameters (Lü et al., 1992). Graphically, one fits the exponentially saturating curve to the measured N1 amplitudes. Here, the fitting parameter τ characterizes the steepness of the curve in seconds. τ signifies the SOI at which the amplitude curve reaches approximately 63% (i.e., 1 - 1/e) of its way towards the saturation limit, indicating the lifetime of adaptation. However, this method is time-consuming and difficult for participants, insofar as boredom-induced mind wandering may confound the effects of attentional focus (Eastwood et al., 2012; Meier et al., 2023).&#13;
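A minimal sketch of this traditional curve-fitting step (hypothetical SOIs and amplitudes; the fit itself is ordinary nonlinear least squares):&#13;
&#13;
    import numpy as np&#13;
    from scipy.optimize import curve_fit&#13;
&#13;
    def n1_model(soi, amp, tau, t0):&#13;
        # Exponentially saturating recovery: A(1 - e^(-(t - t0)/tau))&#13;
        return amp * (1.0 - np.exp(-(soi - t0) / tau))&#13;
&#13;
    sois = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # hypothetical SOIs in seconds&#13;
    n1_amps = np.array([1.1, 1.9, 3.0, 4.3, 4.9])   # hypothetical N1 amplitudes (uV)&#13;
    (amp, tau, t0), _ = curve_fit(n1_model, sois, n1_amps, p0=[5.0, 2.0, 0.1])&#13;
    # tau estimates the adaptation lifetime in seconds&#13;
&#13;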
An alternative approach, based on the amplitude ratio, uses only two stimulus blocks with contrasting SOIs. Plotting the amplitude ratio between the two SOIs over a range of τ values (measured in seconds) shows that the ratio is a monotonically increasing function of τ. Although this ratio-to-τ relationship is not strictly linear, it can be used to estimate the adaptation lifetime rather than the conventional time constant, given that the ratio increases as τ increases. In practical terms, both SOI conditions produced a clear difference in amplitude. The short SOI of 1.7 seconds ensures a distinct ERP with an observable N1 component (an SOI below 300 milliseconds would render the N1 response too small to observe reliably), while the long SOI of 3.4 seconds brings the N1 amplitude closer to its saturation limit. By shortening the experiment duration, this ‘dimensionless’ measure addressed the limitations of the traditional approach without significantly compromising estimation accuracy.&#13;
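And a sketch of the ratio shortcut, inverting the ratio-to-τ mapping numerically (illustrative only; the ratio is taken long-to-short here so that it increases with τ):&#13;
&#13;
    import numpy as np&#13;
&#13;
    def saturation(soi, tau):&#13;
        # Relative N1 recovery at a given SOI (time origin omitted for simplicity)&#13;
        return 1.0 - np.exp(-soi / tau)&#13;
&#13;
    def tau_from_ratio(observed_ratio, soi_short=1.7, soi_long=3.4):&#13;
        # Grid-search the tau whose predicted long/short amplitude ratio&#13;
        # best matches the observed ratio.&#13;
        taus = np.linspace(0.2, 20.0, 2000)&#13;
        predicted = saturation(soi_long, taus) / saturation(soi_short, taus)&#13;
        return taus[np.argmin(np.abs(predicted - observed_ratio))]&#13;
&#13;
    tau_hat = tau_from_ratio(1.5)  # hypothetical long/short N1 amplitude ratio&#13;
&#13;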
Two-way ANOVAs were conducted to assess how the N1 amplitude is influenced by attentional focus (presence or absence) on repetitive auditory stimuli and SOIs (short or long).&#13;
Paired samples t-tests were conducted to assess if the presence or absence of attentional focus on repetitive auditory stimuli significantly affects adaptation lifetime (calculated via amplitude ratio).&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3838">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3839">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3840">
                <text>Ho2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3841">
                <text>Sharon Boyd</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3842">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3843">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3844">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3845">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3846">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3847">
                <text>Patrick May</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3848">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3849">
                <text>Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3850">
                <text>Participants: 26&#13;
Excluded Participants: 16&#13;
Final Sample: 10 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3851">
                <text>ANOVA, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="193" public="1" featured="0">
    <fileContainer>
      <file fileId="215">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2d733bde1c35f66edba319392e339771.pdf</src>
        <authentication>bcd96b51fb4c89cefd082eb9845b288a</authentication>
      </file>
    </fileContainer>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3852">
                <text>Investigating infant expectation on object search tasks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3853">
                <text> Leah Murphy</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3854">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3855">
                <text>The current study aims to distinguish between Piaget’s (1954) theory of object understanding, highlighting the role of object permanence on A not B task performance, and Diamond’s (1985) theory highlighting the role of motor demands and lack of ability to inhibit habitual behaviours during the task. These two theories differ in their predictions for the expectations of the infants taking part, with Piaget (1954) predicting that infants’ lack of object permanence causes poor performance on the task and Diamond (1985) predicting that infants understand the movement of objects and a lack of inhibition of habitual behaviours cause error in performance. We tested 15 nine-month-old infants on a looking version of the A not B task. The use of impossible and possible outcomes was also incorporated on B trials, with the object being revealed from either the correct or incorrect location (e.g., see Ahmed &amp; Ruffman, 1998). Infant first look direction, accumulated looking time during trials and the number of social looks initiated post-outcome, were used as measures. We found significant evidence of the ‘A not B’ error during trials, with a significantly increased number of incorrect first looks on B trials. There was also a descriptive pattern showing surprise at object location reveals with an increased number of social looks during B compared to A trials, though this was not significant. Accumulated looking analysis showed that infants looked longer on A than B trials, suggesting that infants expected the object to be in location B on B trials, demonstrating infants’ ability to understand objects and supporting Diamond’s (1985) theory. However, the implications of the small sample size and of individual differences for the interpretation of looking time data are discussed. Implications for theory and future research are suggested, and overall, results provide support for the application of Piaget’s (1954) theory and suggest that infants have limited object understanding based on their displayed expectations during testing.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3856">
                <text>3.1. Participants&#13;
In this study, 15 participants took part, aged 8 months and 12 days to 9 months and 27 days old (M = 9 months and 3 days, SD = 11.3 days). Six further infants were excluded from data analysis as they became too fussy to complete the study. Participants were recruited from the Lancaster Babylab database and the Lancaster Babylab Facebook page, and also via word of mouth from guardians taking part in the study. &#13;
3.2. Materials&#13;
The video stimuli were created using Canva software (Canva.com, 2023) and were uploaded onto ‘Habit 2’ software (see Oakes et al., 2019) to display the stimuli during testing and to measure the accumulated looking time of the infant participants. The stimuli involved a novel object obtained from the NOUN database (Horst &amp; Hout, 2016). A camera was used to record the social looks exchanged between the infant and guardian, as well as the direction of the infants’ first looks during testing. &#13;
3.3. Design&#13;
This study had a within-subjects design, with all participants being exposed to the same experimental conditions and the same stimuli. To counterbalance for location effects, half of the participants witnessed A trials being hidden in the box on the left, whilst the other half witnessed the object being hidden in the box on the right during A trials. The presentation of the accurate and inaccurate B trials was further counterbalanced across participants, as half of the participants viewed the inaccurate B trials first, and the other half viewed the accurate B trials first.&#13;
3.4. Ethical approval&#13;
Ethical approval for this study was granted by the departmental ethics committee (DEC) at Lancaster University. Guardians were recruited via their preferred contact method and were sent the participant information sheet to read before agreeing to take part in the study. A date and time of testing was arranged at the Babylab building at Lancaster University, via telephone or email. Upon arrival, guardians were presented with the consent form to sign and initial all points before being allowed to take part. They were also given the opportunity to ask any questions about the study and were informed that they could withdraw at any time. &#13;
After the study, the guardian received a five-pound contribution to travel costs, along with a free children’s book for the infant, as a reward for taking part in the study. The guardian also received a debrief sheet to read and to take home, providing them with all contact information of the lead researcher, if they wished to ask any questions or to withdraw from the study. &#13;
3.5. Procedure&#13;
The testing took place in a private room within the Whewell building at Lancaster University. The infant and guardian were seated in front of a computer screen, with the infant in a highchair positioned directly in front of the screen and the guardian in a chair to the side, slightly behind the infant (to allow researchers to see clearly when the infant initiated a social look). The experimenter sat behind a divider at a computer, out of sight of the infant and guardian. A social engagement video of the experimenter saying, “Let’s hide the blap, can you find the blap?” was presented to the infants at the start of the experiment and between each trial, to introduce social communication and guide the attention of the infant to the screen before the stimuli were presented. The infant then watched a series of video stimuli in which a novel object appeared on the screen and moved into one of two boxes, both boxes were then covered (the object was hidden), and there was a delay period of five seconds (see Figure 1). After the delay period, both boxes were revealed, and the location of the toy was visible to the infant. Any movement of the object was accompanied by a sound to guide the attention of the infant to the object, but this sound was not present when the object was revealed to avoid any leading factors when measuring infant expectation. Instead, the occluders made a simple “whoosh” sound when they were removed, to ensure the infant was paying attention. After five identical A trials, the object was then hidden in the second location and the process was repeated across six B trials. However, during the B trials, the object was hidden in the second location but was either revealed to be in the correct (accurate) or incorrect (inaccurate) location (see Figure 2). This variation in outcome was presented alternately to the infant, with the object being revealed from the incorrect location for three out of the six B trials. The study lasted for approximately 10 minutes per participant.&#13;
3.6. Behavioural coding&#13;
Infant looking time was coded online, as trial lengths were infant controlled. Each trial ended when the infant looked away for four seconds. As this controlled the trial length, it was not double coded, as this would inherently lead to a high agreement level. For the coding of infant first look and number of social looks, the videos recorded of the participants were saved and uploaded onto Microsoft OneDrive to be coded offline. First look was defined as the direction that the infant first looked towards once the occluder was removed and the object was revealed. On trials where the infant was not looking as the occluder was removed, the first look was defined as the direction in which they looked once their gaze returned to the screen. The first look direction was coded as correct or incorrect. The number of social looks initiated by the infant per trial was also measured during coding, defined by the infant turning towards the guardian during each trial after an outcome was revealed. Twenty percent of the videos were dual coded and there were no discrepancies between researchers during the dual coding process for first looks (r = 1, p &lt; 0.01) or social looking (r = 1, p &lt; 0.01).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3857">
                <text>.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3858">
                <text>Shiyu Pang&#13;
Yuewen Qin</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3859">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3863">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3864">
                <text>Chi-squared, Correlation, Factor analysis, Linear mixed effects modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3866">
                <text>Murphy2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3867">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3868">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3860">
                <text> Kirsty Dunn</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3861">
                <text>In this study, 15 participants took part, aged 8 months and 12 days to 9 months and 27 days old (M = 9 months and 3 days, SD= 11.3 days). Six further infants were excluded from data analysis as they became too fussy to complete the study.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3862">
                <text>Chi-squared, Correlation, Factor analysis, Linear mixed effects modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3870">
                <text>Cognitive-developmental, Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="194" public="1" featured="0">
    <collection collectionId="11">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="987">
                  <text>Secondary analysis</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3871">
                <text>Third Parties and Police Use of Lethal Force: Evidence from the Mapping Police Violence Database </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3872">
                <text>Sian Reid</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3873">
                <text>6th September 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3874">
                <text>Over recent years, media coverage has highlighted the use of excessive force by some police officers. The use of lethal force against black and other ethnic minority citizens has been identified as a cause for significant concern. Research in the bystander literature and in non-fatal policing contexts has shown that third parties can have positive effects in reducing the severity of such incidents. The role of third parties in fatal force events, however, has not been investigated, and this is the gap the current study seeks to address. The Mapping Police Violence database was used to identify a year’s worth of lethal force events in the US. Newspaper articles relating to these incidents were coded with a predefined coding framework to examine the presence of third parties, and the nature of any social relationships with third parties, in relation to the type of lethal force used. The results revealed that third parties were present in just under half of incidents, and that the presence of a third party with a pre-existing social relationship to the citizen was associated with a lower likelihood of officers using forms of ‘less lethal’ force to the extent that a citizen fatality resulted. These findings highlight the potential importance of third parties for understanding the nature of lethal police-citizen interactions, and the potential protective role the presence of known others may have in reducing the likelihood of officers excessively using forms of less lethal force.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3875">
                <text>Lethal force, Third Parties, Police Citizen Interactions, Use of Force</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3876">
                <text>A secondary data analysis was used to examine the presence of third parties in incidents of police use of lethal force. The Mapping Police Violence database (Mapping Police Violence, 2020) was the primary dataset for the study. This is a freely available, open public database compiled by researchers in the US which aims to provide a record of all police-involved deaths in the country. The database has recorded police-involved deaths since 2013, primarily gathering information from news articles published by various American news outlets. The type of force engaged in by officers that resulted in death was used as the outcome variable. The predictor variables were the presence of third parties, the presence of known or unknown third parties, the number of officers present, the presence of other emergency services, the location of the incident, the race of the citizen, the gender of the citizen, the alleged presence of a weapon, the initial reason for the encounter, the presence of any digital technology capturing the event, and the level of threat posed to the officer. &#13;
The Mapping Police Violence database records multiple variables relating to these incidents, including individual and situational factors. Several of the predictor variables in the current study were gathered from this dataset: specifically, the type of lethal force used, the alleged presence of a weapon, the race of the citizen, the gender of the citizen, the level of threat posed to the officer, the initial encounter reason, and the presence of a body-worn camera. Most of these variables were used as recorded in the dataset; however, the level of threat posed to the officer was recategorised. The multiple levels of threat recorded in the dataset were regrouped into three categories: attack (indicating the greatest level of threat to the officer), other (any other level of threat), and none (incidents in which it was clear there was no threat to the officer). In the original data only the presence of a body-worn camera is recorded. For the current study this variable was broadened to capture the presence of any digital technology recording the event, such as CCTV or smartphones, as research has found that the presence of any digital technology, and not only a body camera, can affect police-citizen interactions (Shane et al., 2017). &#13;
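To make the recategorisation concrete, a minimal R sketch is given below (R is named among the project files); the data frame and column names are illustrative assumptions rather than the database’s actual schema. &#13;
# Hedged illustration of the three-level threat regrouping described&#13;
# above; "mpv" and "threat_level" are assumed names, not the real schema.&#13;
mpv$threat3 = ifelse(mpv$threat_level == "attack", "attack",&#13;
              ifelse(mpv$threat_level == "none", "none", "other"))&#13;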
The database records the citizen’s cause of death in relation to the type of force used. In incidents where multiple types of force were identified as contributing to the citizen’s death, the database records a list of all types of force involved. The types of force recorded include gun, taser, pepper spray, baton and physical restraint. For the current study these types were grouped to provide an outcome variable with fewer levels. The grouping was done in line with previous research on police use of force, which identified a gun as a distinct type of force due to the increased risk of lethal outcomes. The other types of force were grouped into a second category of ‘less lethal’ force, as these have been identified as alternatives to the use of a gun that would be expected to reduce the likelihood of a citizen fatality (Sheppard &amp; Welsh, 2022). In incidents where multiple types of force were used, the most severe form was recorded; for example, if the cause of death was attributed to a gun and a taser, the incident was recorded with a gun as the type of lethal force used. &#13;
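A comparable sketch of this outcome grouping, again with assumed names, might be: &#13;
# Illustrative only: collapse listed causes of death to the most severe&#13;
# form, treating any mention of a gun as the "gun" outcome;&#13;
# "cause_of_death" is an assumed column name.&#13;
mpv$force_group = ifelse(grepl("gun", tolower(mpv$cause_of_death)),&#13;
                         "gun", "less lethal")&#13;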
The dataset contains links to the news articles used to gather information about each police-involved death. The variables relating to the presence of others were gathered by coding the news articles linked in the database to individual incidents occurring between 6th March 2022 and 6th March 2023, providing a sample of 1,257 police-involved deaths. News articles have recognised limitations as a source of information, particularly potential media bias in the reporting of crime-related stories (Lawrence, 2000). However, research on newspaper reporting of police use-of-force incidents has found that, for many factors, news reports were consistent with police reports of the same incidents (Ready et al., 2008). For the current study, news articles were used because they allow the events of police-involved deaths to be examined in relation to the presence of third parties. &#13;
To identify the relevant incidents, three primary exclusion criteria were applied prior to the coding of the news articles. First, to ensure sufficient information for the presence of third parties to be examined, a minimum word count of 150 words was required in at least one of the associated news articles. Second, as the study’s primary interest was in the use of lethal force, which involves an on-duty officer using force, only incidents relating to on-duty officers were included. Finally, incidents in which the officer’s use of force was accidental, such as car crashes involving police officers, were excluded, as these events have different characteristics from those in which officers intentionally use force against a citizen. Applying these criteria left a sample of 1,052 incidents of police use of lethal force. &#13;
To investigate the presence of others in these incidents, a behavioural coding scheme (Philpot et al., 2019) was defined prior to the analysis and applied to the news articles to capture the presence of third parties. This coding scheme contained 12 individual items capturing the presence of third parties and any social ties between third parties and the citizen involved (see Appendix A for the full coding scheme). Two additional items captured the presence of multiple officers or other emergency services, and one further code captured whether the incident occurred in a public, semi-public or private location. Each item was coded 1 for presence, 0 for absence, or 99 where presence could not be determined. In total, 15 codes were included in the scheme. Some examples of the codes relating to the presence of third parties are given below, followed by a sketch of how such coded data might be prepared for analysis:&#13;
“The presence of a third-party with a pre-existing social connection to the primary citizen involved”&#13;
“The presence of more than one officer”&#13;
“The presence of a third-party with no pre-existing social connection to the primary citizen involved”&#13;
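As an illustration only, coded exports of this kind can be prepared for analysis by treating the 99 codes as missing values; a minimal R sketch with a hypothetical file name follows. &#13;
# Read the exported codes and treat 99 ("unclear") as missing;&#13;
# "coded_articles.csv" is a hypothetical file name.&#13;
codes = read.csv("coded_articles.csv")&#13;
codes[codes == 99] = NA&#13;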
To facilitate coding the news articles in line with the coding scheme, a Qualtrics survey (https://www.qualtrics.com) was created. The survey presented the individual items of the coding framework in questionnaire format, allowing each item to be coded as a closed-ended response to a question about the presence of third parties. The responses were then transferred to an Excel document so the data could be prepared for analysis. &#13;
Ethical approval was obtained for this study: it was reviewed and approved by a member of the Lancaster University Psychology Department, the supervisors’ ethics partner. &#13;
The reliability of the coding scheme and its application to the news articles was assessed through independent double coding of 10% of the sample by a second researcher. To assess the level of agreement between the two researchers for each variable, Gwet’s AC1 coefficient (Gwet, 2014) was calculated. In line with the recommendations of Landis and Koch (1977), the resulting coefficients were interpreted as follows: 0.4 or above indicating moderate agreement, 0.6 or above indicating substantial agreement, and 0.8 or above indicating almost perfect agreement between raters’ scores. For 13 of the variables an agreement level of substantial or almost perfect was reached, as seen in Table 1 (Appendix B). For the variable relating to the third party being a friend of the citizen there was no variation in responses (i.e., 100% agreement), and therefore a coefficient could not be calculated. For the location variable only a moderate level of agreement was found; as a result, this variable was excluded from the analysis. &#13;
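For readers unfamiliar with the statistic, the following sketch shows one way Gwet’s AC1 could be computed in R for a single item rated by two coders; it is an illustrative rendering of the published formula, not the study’s own script. &#13;
# Illustrative computation of Gwet's AC1 for two coders rating the same&#13;
# articles on one categorical item.&#13;
gwet_ac1 = function(coder1, coder2) {&#13;
  cats = sort(unique(c(coder1, coder2)))&#13;
  pa = mean(coder1 == coder2)                      # observed agreement&#13;
  # mean proportion of ratings falling in each category, across coders&#13;
  pik = (tabulate(match(coder1, cats), length(cats)) +&#13;
         tabulate(match(coder2, cats), length(cats))) / (2 * length(coder1))&#13;
  pe = sum(pik * (1 - pik)) / (length(cats) - 1)   # chance agreement&#13;
  (pa - pe) / (1 - pe)                             # AC1 coefficient&#13;
}&#13;
gwet_ac1(c(1, 0, 1, 1, 0), c(1, 0, 0, 1, 0))       # returns 0.6&#13;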
Figure 1 depicts a flowchart of the process undertaken to sample the relevant incidents. The first part of the flowchart shows the initial process used to identify all police-involved deaths recorded in the Mapping Police Violence database in the prior 12 months. Following the initial data collection, descriptive statistics highlighted very limited variation in the outcome variable within the initial sample of 1,052 incidents: 990 involved a gun as the primary cause of death and only 62 involved other forms of force. In this initial sample a cause of death not involving a gun would statistically be considered a rare event, which would have presented challenges in using this variable as the outcome in subsequent analyses. In line with published recommendations (Shaer et al., 2019), an oversampling approach was chosen to overcome this limitation: further incidents in the dataset that did not involve a gun as the cause of death were oversampled so that at least 10% of the sample involved a cause of death other than a gun. As shown in Figure 1, to keep these incidents as similar to the primary sample as possible, they were sampled only from the three preceding years, limiting any additional variation that a wider date range might have introduced. This led to the identification of a further 182 incidents in which the citizen’s cause of death did not involve a gun. The same exclusion criteria were then applied to this sample, excluding a further 65 incidents and leaving 117 additional incidents, which were coded following the same procedure as the initial sample. This oversampling procedure produced a final sample of 1,169 incidents. &#13;
[Figure 1. Flowchart of the incident sampling and oversampling procedure.]&#13;
The data analysis involved chi-square tests of independence, examining whether the presence of others during fatal police-citizen interactions had a statistically significant relationship with the outcome variable of the type of lethal force used by officers. Given the exploratory nature of the study, no direction or nature of the relationship between the third-party predictor variables and the type of fatal force was predicted (McIntosh, 2017). Prior to the main analyses, descriptive statistics were run to investigate distributions within variables and to identify any rare-event variables. &#13;
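In R, such a test reduces to building a contingency table and calling chisq.test; the sketch below is illustrative, with assumed variable names rather than the study’s actual columns. &#13;
# Chi-square test of independence between third-party presence and type&#13;
# of lethal force; "coded", "third_party" and "force_group" are assumed&#13;
# names, not the study's actual variables.&#13;
tab = table(coded$third_party, coded$force_group)&#13;
chisq.test(tab)&#13;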
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3877">
                <text>Excel (.csv)&#13;
R script (.R)&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3878">
                <text>Reid (2023)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3879">
                <text>open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3880">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3881">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3882">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3888">
                <text>Charlotte Thompson</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3883">
                <text>Prof. Mark Levine and Dr. Richard Philpot</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3884">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3885">
                <text>Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3886">
                <text>1169</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3887">
                <text>Chi-squared</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="195" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3889">
                <text>The Effect of Positive and Negative Emotional States on the Price Sensitivity to Green Fast-Moving Consumer Goods in the UK</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3890">
                <text>Oleksandr (Alex) Myroshnychenko</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3891">
                <text>08/09/2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3892">
                <text>Consumers are growing increasingly aware of the environmental consequences of their daily purchases, creating a potentially lucrative space for agile brands to leverage sustainable or green fast-moving consumer goods (FMCG) production and increase revenue and profit. However, fuller exploitation of this growth is impeded by the high costs of offering greener FMCGs, which are passed on to consumers via higher prices, leading to a preference for cheaper, non-green FMCGs due to price sensitivity. The purpose of this study was to investigate the power of positive and negative emotional states to reduce this price sensitivity and thus increase green FMCG buying behaviour. To induce the two emotional states, conventionally happy and sad video stimuli were used, followed by a fictional product selection between green and non-green FMCGs. The research involved two phases. Phase 1 applied a qualitative method in the form of two focus groups (total n = 10) to test and enhance the general research procedure, while gathering additional insights into the overall study subject. Phase 2 integrated the refined procedure into a quantitative questionnaire, which involved a sample for each emotional state and a third sample as a control (total n = 300). The results demonstrated that neither positive nor negative emotional states had an overall significant influence on FMCG product selection. The discussion of the results includes insights from Phase 1 and provides recommendations for brands. The study limitations and future research directions are also presented. The research was conducted for and funded by Astroten, a behavioural science consultancy in London. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3893">
                <text>Psychology, behaviour, mood, pricing, green marketing, advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3894">
                <text>Methods &#13;
Overall Design &#13;
In accordance with the research questions and hypotheses, the methodology tested the effect of positive and negative emotional states, induced by two online videos. Each video contained copyright-free footage, one depicting humorous scenes (positive state) and the other depicting sadness (negative state). The impact of the videos’ mood induction was primarily measured by the outcome of a fictional product selection in which participants chose between a more expensive green FMCG and a less expensive non-green FMCG. &#13;
Specific Design &#13;
The project comprised two data collection and analysis phases. Phase 1 was a pilot qualitative study in which the overall design described above was tested and discussed, and a series of additional questions was asked regarding the relationship between green and non-green FMCGs, emotional state (mood), and price. The utility of Phase 1 was therefore twofold. First, it obtained richer insights, afforded by the inherent advantage of qualitative research over quantitative research, through an in-depth exploration of participants’ perspectives on their decision-making around more expensive green FMCGs. Second, this same advantage yielded direct feedback from the participants on the general procedure so that it could be refined for Phase 2, an online quantitative questionnaire. The questionnaire’s purpose was to provide a stronger empirical basis for the effects of positive and negative emotional states. &#13;
Phase 1: Pilot Qualitative Study &#13;
The qualitative approach for Phase 1 consisted of focus groups. This method was selected because focus groups facilitate the dynamic development of ideas between participants, in contrast to individual in-depth interviews, in which certain thoughts and perspectives might fail to emerge; to introduce them, the researcher would risk posing leading questions and affecting data validity. In addition, given the study timeframe, focus groups were deemed more feasible in terms of data collection and analysis. &#13;
Two focus groups (FG1, approximately 70 minutes; FG2, approximately 90 minutes) were conducted, each comprising five participants and split approximately evenly by gender. The participants were postgraduate students at Lancaster University, recruited via opportunity sampling by posting the study details in a WhatsApp group chat for residents of the university’s Graduate College accommodation. Participants were each paid £15, and refreshments (snacks and beverages) were provided. &#13;
Regarding the mood-inducing stimuli for Phase 1, the positive emotional state stimulus was a video containing clips of monkeys performing comic or happy actions (e.g., reading a newspaper or jumping around) (see Figure 1). The video was sourced from the Nature ALL (2020) channel on YouTube and was copyright-free (https://youtu.be/YQ4xwK7_rUY). It also included a copyright-free comedic soundtrack that complemented the footage. The video was edited down to one minute to ensure sufficient emotional impact while preventing excessive exposure, which could have led to boredom and undermined the desired emotional state.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3895">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3896">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3897">
                <text>Myroshnychenko (2023)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3898">
                <text>Dan Qiao</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3899">
                <text>open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3900">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3901">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3902">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3903">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3904">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3905">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3906">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3907">
                <text>Phase 1 (Qualitative) = 10; Phase 2 (Quantitative) = 300</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3908">
                <text>Correlation, Qualitative (Thematic Analysis), T-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
