<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=13&amp;sort_field=added" accessDate="2026-05-03T13:21:21+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>13</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="173" public="1" featured="0">
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3503">
                <text>Does implicit mentalising involve the representation of others’ mental state content?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3504">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3505">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3506">
                <text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame-of-reference, which creates a spatial overlap between stimulus and response locations in the joint (but not individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study attempted to investigate the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which typical geometric stimuli were replaced with a unique set of animal silhouettes. Each half of the set was surreptitiously assigned to either the participant or their partner. Critically, to examine the content of co-representation, participants were presented with a surprise image recognition task afterwards. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results were discussed, and possible methodological remedies for future studies were suggested.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3507">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3508">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to remotely run the stimuli selection pre-test. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes to use in the main experiment. Recognisability was an important consideration because participants would glimpse the animals only briefly; therefore, the ability to recognise the silhouettes quickly and subconsciously was paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and sequentially presented. Each image was displayed for 1000 ms to match the duration of stimuli exposure in the final experimental design.&#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating corresponding to each naming attempt (again, on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability ratings for each animal were averaged and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of the same species. Because two animals tied for 32nd place with the same recognisability score, the animal with the higher name-guessing confidence rating was selected.&#13;
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously participated in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females), 51 of whom were students, staff, or members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate the strength of previously found effects) and the additional memory/recognition task, a conservative-leaning effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a medium-to-small repeated-measures, within-between interaction was approximately 52.&#13;
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition Tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 @ 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.&#13;
The 32 animals chosen via the pre-test to be used in the main experiment (Simon/Recognition task) were recoloured to be entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task&#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to sit at a third computer, where they sat approximately 60 cm (diagonally, approximately 45° from the centre of the screen) away from the computer either on the left or right side, with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm beside their partner. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton when an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which they appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task which came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. The 16 animals were further divided in half, with each half matched to one of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the Recognition Task. Participant sitting location (left/right), stimuli colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the Recognition task) were counterbalanced between participants. Additionally, stimuli presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
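To make the assignment concrete, here is a minimal Python sketch of the stimulus split described above (illustrative only: the animal names are hypothetical placeholders, and the real study counterbalanced these assignments across participants rather than drawing them at random):&#13;
import random&#13;
&#13;
# Hypothetical sketch of the stimulus assignment; not the study's actual code.&#13;
animals = ["animal_%02d" % i for i in range(32)]  # stand-ins for the 32 pre-test silhouettes&#13;
random.shuffle(animals)&#13;
simon_set, foils = animals[:16], animals[16:]  # 16 Simon-task stimuli, 16 recognition foils&#13;
colours = random.sample(["blue", "orange"], 2)&#13;
assignment = {&#13;
    colours[0]: simon_set[:8],  # eight animals in the participant's colour&#13;
    colours[1]: simon_set[8:],  # eight animals in the partner's colour&#13;
}&#13;
print(assignment)&#13;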
After reading brief instructions, participants completed a practice section. When participants achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks, where each block contained 16 trials (which corresponded to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., 8) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of compatible and incompatible trials for each participant (i.e., four of each per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible.&#13;
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks—in some studies, each trial (and thus stimuli presentation) immediately terminated upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was only displayed for a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), after which came a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration to 1000 ms irrespective of participant response ensured that each animal colour/species was displayed for an equal duration of time. This was important so as not to bias participants’ incidental memory towards trials wherein one participant was slower to respond (and would have therefore kept the stimulus on screen for longer, disproportionately encouraging encoding).&#13;
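The fixed-duration trial structure can be sketched in PsychoPy-style Python (a hypothetical reconstruction under the timings stated above, not the study's actual programme; the keyboard stands in for the custom pushbuttons, and the stimulus shown is a practice-style circle):&#13;
from psychopy import core, event, visual  # assumes PsychoPy is installed&#13;
&#13;
win = visual.Window(units="pix", color="grey")&#13;
fixation = visual.TextStim(win, text="+")&#13;
stim = visual.Circle(win, radius=40, fillColor="orange", pos=(-300, 0))&#13;
feedback = visual.TextStim(win, text="")&#13;
clock = core.Clock()&#13;
&#13;
fixation.draw(); win.flip(); core.wait(0.25)  # 250 ms fixation cross&#13;
&#13;
clock.reset(); event.clearEvents()&#13;
rt = None&#13;
for frame in range(60):  # 60 frames at 60 Hz, i.e. the full 1000 ms, regardless of response&#13;
    stim.draw(); win.flip()&#13;
    keys = event.getKeys(timeStamped=clock)&#13;
    if keys and rt is None:  # log the first response but keep the stimulus on screen&#13;
        rt = keys[0][1]&#13;
&#13;
feedback.text = "well done" if rt is not None else "timeout exceeded"&#13;
feedback.draw(); win.flip(); core.wait(0.5)  # feedback only after the full second has elapsed&#13;
win.flip(); core.wait(0.25)  # 250 ms blank intertrial interval&#13;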
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal had previously been assigned to, and presented in the Simon task in, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in the individual condition’s case, this simply refers to the not-self-assigned colour, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to the individual computers which they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one-by-one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals had appeared in during the previous (Simon) task—so long as they remembered having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Thirty-two animal silhouettes were presented, of which 16 had been seen in the Simon task, while the remaining 16 previously unseen animal images were added as foils. The participants’ responses to the two aforementioned questions were recorded as key response variables.&#13;
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked whether they had any suspicions about what the study was testing, or whether they had paid specific attention to, and/or purposely memorised, the animal species shown in the Simon task (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which may undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner via two questions. The first was a text-based question which asked how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging between “I have never seen him/her before: s/he is a stranger to me.” and “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consisted of pictographic representations of the degree of interpersonal relationships. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each consisting of two Venn diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was used as a confirmatory measure for the IOS scale, which served as the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3509">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3510">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3511">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3512">
                <text>Elisha Moreton&#13;
Aubrey Covill</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3513">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3514">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3515">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3516">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3517">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="174" public="1" featured="0">
    <fileContainer>
      <file fileId="180">
        <src>https://www.johnntowse.com/LUSTRE/files/original/bd93a21aa3361f315e8f432abca9fe74.csv</src>
        <authentication>50ca217126f8697be868e298f2a8a6d4</authentication>
      </file>
      <file fileId="181">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6d1b89ec7f03710b11bce66408332f90.pdf</src>
        <authentication>4a222c6141db92dc7ee55aa00fb0d0ce</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3518">
                <text>Does implicit mentalising involve the representation of others’ mental state content?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3519">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3520">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3521">
                <text>Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint versus an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame-of-reference, which creates a spatial overlap between stimulus and response locations in the joint (but not individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study attempted to investigate the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which typical geometric stimuli were replaced with a unique set of animal silhouettes. Each half of the set was surreptitiously assigned to either the participant or their partner. Critically, to examine the content of co-representation, participants were presented with a surprise image recognition task afterwards. Image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the current experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results were discussed, and possible methodological remedies for future studies were suggested.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3522">
                <text>implicit mentalising, co-representation, joint action, domain specificity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3523">
                <text>Pre-test: Selection of Suitable Stimuli&#13;
Participants&#13;
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.&#13;
Stimuli and Materials&#13;
Pavlovia, the online counterpart to the experiment-building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to remotely run the stimuli selection pre-test. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.&#13;
Design and Procedure&#13;
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes to use in the main experiment. Recognisability was an important consideration because participants would glimpse the animals only briefly; therefore, the ability to recognise the silhouettes quickly and subconsciously was paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and sequentially presented. Each image was displayed for 1000 ms to match the duration of stimuli exposure in the final experimental design.&#13;
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating corresponding to each naming attempt (again, on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose which 32 animals were included, the recognisability ratings for each animal were averaged and sorted in descending order. Duplicate animal species were excluded by removing all but the highest-scoring animal of the same species. Because two animals tied for 32nd place with the same recognisability score, the animal with the higher name-guessing confidence rating was selected.&#13;
Main Experiment&#13;
Participants&#13;
Sixty-five participants who had not previously participated in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 females), 51 of whom were students, staff, or members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.&#13;
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate the strength of previously found effects) and the additional memory/recognition task, a conservative-leaning effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a medium-to-small repeated-measures, within-between interaction was approximately 52.&#13;
Stimuli and Materials&#13;
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition Tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 @ 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.&#13;
The 32 animals chosen via the pre-test to be used in the main experiment (Simon/Recognition task) were recoloured to be entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, the animals were displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).&#13;
Figure 1&#13;
Example of Stimuli Used in Simon Task&#13;
Note. Diagram (a) contains a screenshot of the Simon Task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.&#13;
Design and Procedure&#13;
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to sit at a third computer, where they sat approximately 60 cm (diagonally, approximately 45° from the centre of the screen) away from the computer either on the left or right side, with a custom pushbutton set directly in front of them. They were instructed to use their dominant hand on the pushbutton. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm beside their partner. In the individual condition, an empty chair was placed in an equivalent location next to the participant.&#13;
In both conditions, participants were individually assigned a colour (either blue or orange) to pay attention to. Participants were instructed to “catch” the animals by pressing their pushbutton when an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any of the animal species, nor to the location (left/right) in which they appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task which came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were chosen to be displayed during the Simon task. The 16 animals were further divided in half, with each half matched to one of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the Recognition Task. Participant sitting location (left/right), stimuli colour (blue/orange), and animals presented (as stimuli in the Simon task / as foils in the Recognition task) were counterbalanced between participants. Additionally, stimuli presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.&#13;
After reading brief instructions, participants completed a practice section. When participants achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks, where each block contained 16 trials (which corresponded to the 16 chosen animals), totalling 128 trials. Half of the trials in each block (i.e., 8) were spatially compatible, while the remaining half were incompatible. Furthermore, each block contained the same number of compatible and incompatible trials for each participant (i.e., four of each per participant). Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible.&#13;
A mandatory 10-second break was included at the half-way point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, colour stimuli (circles in the practice trials, animal silhouettes in the main experiment) appeared on either the left or right of the screen for 1000 ms. A 250 ms intertrial interval (blank screen) was implemented. If a participant correctly pressed their pushbutton when stimuli of their assigned colour appeared, they were met with the feedback “well done”. Incorrect responses (i.e., when a participant pressed their pushbutton when a stimulus not of their assigned colour appeared) or timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to recording accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus display and pushbutton response) was also recorded and coded as response variables.&#13;
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after a full second had elapsed. This deviated from the design of previously used Simon tasks—in some studies, each trial (and thus stimuli presentation) immediately terminated upon any type of response (e.g., Dudarev et al., 2021); in other studies, each stimulus was only displayed for a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), after which came a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration to 1000 ms irrespective of participant response ensured that each animal colour/species was displayed for an equal duration of time. This was important so as not to bias participants’ incidental memory towards trials wherein one participant was slower to respond (and would have therefore kept the stimulus on screen for longer, disproportionately encouraging encoding).&#13;
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal had previously been assigned to, and presented in the Simon task in, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in the individual condition’s case, this simply refers to the not-self-assigned colour, i.e., other-assigned).&#13;
After completing the Simon task, participants were each guided back to the individual computers which they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one-by-one and were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals had appeared in during the previous (Simon) task—so long as they remembered having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Thirty-two animal silhouettes were presented, of which 16 had been seen in the Simon task, while the remaining 16 previously unseen animal images were added as foils. The participants’ responses to the two aforementioned questions were recorded as key response variables.&#13;
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, would lead to further questions. For example, they were asked whether they had any suspicions about what the study was testing, or whether they had paid specific attention to, and/or purposely memorised, the animal species shown in the Simon task (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which may undermine the usefulness of the data collected in the object recognition task.&#13;
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner via two questions. The first was a text-based question which asked how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging between “I have never seen him/her before: s/he is a stranger to me.” and “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consisted of pictographic representations of the degree of interpersonal relationships. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each consisting of two Venn diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was used as a confirmatory measure for the IOS scale, which served as the primary measure of interpersonal closeness.&#13;
Figure 2&#13;
Inclusion of Other in the Self (IOS) scale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3524">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3525">
                <text>Data/Excel.csv&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3526">
                <text>Wong07092022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3527">
                <text>Aubrey Covill&#13;
Elisha Moreton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3528">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3529">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3530">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3531">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3532">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="175" public="1" featured="0">
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3533">
                <text>A review of the PEACE interview model training and implementation in real-life interviews</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3534">
                <text>Jack Hardaker</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3535">
                <text>07/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3536">
                <text>Police officers in England and Wales are trained to conduct interviews in line with the PEACE model of interviewing; however, the level of implementation of the PEACE procedures can vary between organisations and over time. The present study aimed to review the quality of current PEACE model interviewing training and its implementation in interviewing practice. Initially, in Study One, 62 training feedback forms from the Cumbria police force were analysed using thematic analysis to gain an overview of the training’s strengths and weaknesses. In Study Two, 30 interviews from 10 officers trained on these courses were analysed, to see whether the reported intention to implement the PEACE model and the techniques learnt during training were transferred into real-life interviewing practice. Data from Study One indicated that the course was satisfactorily structured and presented, with data from Study Two showing improvement in most Tier-2 interviewers’ interviewing abilities after training, though some interviewers failed to implement concepts and techniques covered on the training course. Potential explanations for these findings and ways to improve the transference of skills from interviewing training are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3537">
                <text>PEACE model, Interviewing, Investigation, Interrogation, Training, Evaluation, Interviewing techniques, PEACE model training</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3538">
                <text>Study One&#13;
Method&#13;
Participants&#13;
All 62 participants undertook either a Tier-2 or Tier-3 interviewing course with the Cumbria police force. Participants were currently serving officers of constable rank or higher, of whom 34 were female and 28 were male. There was considerable variance in years of service between Tier-2 interviewers and Tier-3 interviewers, though no exact measure of years of service or age was included with the data provided.&#13;
Materials&#13;
Data&#13;
The 62 training evaluation forms were provided to the researcher by the Cumbria police force, and were from either the Tier-2 investigative interviewing or the Tier-3 investigative interviewing course. The forms contained two scales indicating levels of confidence in conducting interviews before and after receiving the training, a further four scales indicating levels of agreement with questions relevant to the study, and a single “Yes or No” question indicating whether the participant was satisfied with the training received overall (see Appendix A for the full list of questions and exact wording). For all six scales, participants rated their strength of agreement with the statement using a scale of one to five (Likert, 1932). Three open questions were included on the form: “If you have any other comments about this training please record them here”, “Are there any elements of the course you did not find useful or feel require further explanation?”, and “If you have any other comments to make about this course please record them below.”&#13;
Ethics&#13;
Ethical approval was granted by a member of the Lancaster University Psychology department before data collection and analysis began. Data were collected by the Cumbria police force, with all participants consenting to complete the feedback forms in the knowledge that their comments would be evaluated to improve the training courses. All course evaluation forms were reviewed by the researcher in a secure location at Cumbria police force headquarters, with findings stored on the secure Lancaster University OneDrive system. No information that could allow an individual to be personally identified has been included in this report.&#13;
Study Two&#13;
Participants Five interviewers who had undertaken the Tier-2 interview training course with the Cumbria police force and five interviewers who had undertaken the Tier-3 interview training course with the Cumbria police force were randomly selected from the sample of 62 officers who had completed the evaluation forms used in Study One. At the time of writing, no officer had undertaken further training than the course ascribed to them. Six officers were female with four being male. As in Study One, no age data was available to record. On average Tier2 trained interviewers had 2.6 years of interviewing experience (SD = 0.8) with a range of two to four years of experience, whilst Tier-3 trained interviewers had 6.4 years of interviewing experience (SD = 3.93), with a considerably wider range of between three and 14 years of experience. Materials Data Thirty interview videos were reviewed by the researcher, three from each interviewer with one interview being before training, one being as close as possible after training and one being the most recent interview that the interviewer had conducted. Of these interviews, only two were conducted with victims and 28 were conducted with suspects, with both victim interviews being conducted by Tier-3 officers. Interviews covered a wide range of offences, with eight counts of assault, three shoplifting, two of burglary, two of possession of illegal drugs, two of criminal damage, one of resisting arrest, seven of sexual assault, six of rape, and one accessory to murder. Tier-2 interviews on average lasted 21 minutes (SD = 12.29) with the shortest being only five minutes and the longest being 52 minutes, whilst Tier-3 interviews lasted on average 56 minutes (SD = 18.82) with the shortest being 18 minutes and the longest being 86 minutes. Tier-2 interviewers’ most recent interview was on average 275.4 days (SD = 182.69) after training, and the closest interview to their training date with on average 52.2 days (SD = 41.33) after completing the training. Tier-3 interviewers’ most recent interview was on average 340.2 days (SD = 64.39) after training, and the closest interview to their training date with on average 36.8 days (SD = 21.07) after completing the training. Procedure The interview footage was provided by the Cumbria police force on a secure internet system only accessible from the Cumbria police station (the researcher took anonymised notes, and no video recordings or other personally identifiable information left the secure system). From the available interview recordings, footage was selected to be as close as possible to before and after the interviewer’s training date, as well as the most recent interview where the interviewer acted as the lead or sole interviewer. These were used to ensure the recordings gave a clear indication of pre-training ability, immediate post-training ability, and to see if training abilities were improved by the interviewing courses—as well as to check if these improvements continued after a long period since the training. Notes were subsequently coded into four categories for adherence to the PEACE model and techniques were tallied whenever used; 1) examples of preparation, 2) establishment of rapport, 3) appropriate use of the account, clarify and challenge phase and 4) the inclusion of a closure phase. The evaluation phase (where interviewers are given feedback on their performance) of the PEACE model wasn’t included in this study, as this process wasn’t included in the footage of the interviews. 
The development of the categories and the categorisation of behaviours was informed by the PEACE model training research of Hall (1997) and Clarke and Milne (2001). Examples of preparation included behaviours such as highlighting new information without needing to refer to notes or rely on inference, preparation of questions, and a clear understanding of the interviewee’s circumstances and case. The establishment of rapport was noted when interviewers used jokes or friendly language, open and trusting body language (eye contact, open posture, mirroring of behaviour; Sandler &amp; Lillo-Martin, 2006), or showed concern or interest in the interviewee’s needs, such as asking if they needed refreshments or asking how they felt. Appropriate use of the account, clarify and challenge phase was categorised by interviewers allowing the interviewee time to give an account (following the 80-20 rule of conversation management; Shepherd, 2007), clarifying unclear statements through summarising or re-asking questions, and asking questions which challenged accounts given by the interviewee. The inclusion of a closure phase was noted by the summarising of accounts at the end of an interview, explaining what would happen after the interview concluded, and giving the interviewee time to ask questions or provide further comments. The use of techniques mentioned on the evaluation forms as being taught, and those seen on the courses’ syllabuses, was also recorded. These techniques were the use of the SER3 notetaking system, the use of silence, the use of a second interviewer, the use of open-ended questions, bad character warnings, and special warnings. The counts for both adherence to the PEACE model and techniques utilised were subsequently tallied and compared between Tier-2 and Tier-3 interviewers. Obtainment of a confession was not recorded in the data, as interviewees often enter an interview knowing whether they intend to confess (Milne &amp; Bull, 1999), and interviews repeatedly stifled by “No comment” responses would incorrectly be reported as failures. Ethics Ethical approval was granted by a member of the Lancaster University Psychology Department’s ethics committee, and the study was approved by the Cumbria police force.&#13;
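To illustrate the tallying step described above, the sketch below (Python, used purely for illustration; the category labels follow the four PEACE-adherence codes, while the interview data and counts are hypothetical, not drawn from the study) aggregates coded behaviour counts per interview and compares tier totals:&#13;

from collections import Counter

# Hypothetical coded notes: each entry is one observed behaviour,
# tagged with one of the four PEACE-adherence categories used above.
interviews = {
    ("Tier-2", "pre-training"): ["preparation", "rapport"],
    ("Tier-2", "most-recent"): ["preparation", "rapport", "account_clarify_challenge", "closure"],
    ("Tier-3", "most-recent"): ["preparation", "rapport", "rapport", "closure"],
}

# Tally behaviours per tier, mirroring the Tier-2 vs Tier-3 comparison.
totals = {"Tier-2": Counter(), "Tier-3": Counter()}
for (tier, _stage), codes in interviews.items():
    totals[tier].update(codes)

for tier, counts in totals.items():
    print(tier, dict(counts))
&#13;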
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3539">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3540">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3541">
                <text>Hardaker2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3542">
                <text>Donavan Cheung</text>
              </elementText>
              <elementText elementTextId="3543">
                <text>Mert Kaplanoglu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3544">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3545">
                <text>N/A</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3546">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3547">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3548">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3549">
                <text>Sophie Nightingale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3550">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3551">
                <text>Forensic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3552">
                <text>Study One: N = 62, Study Two: N = 10</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3553">
                <text>Power analysis</text>
              </elementText>
              <elementText elementTextId="3554">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
              <elementText elementTextId="3555">
                <text>T-Test</text>
              </elementText>
              <elementText elementTextId="3556">
                <text>Other</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="176" public="1" featured="0">
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3557">
                <text>An investigation of the influence of individual differences on susceptibility to product placement</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3558">
                <text>Ellen Dimeck</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3559">
                <text>14/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3560">
                <text>Product placement increased in popularity in 1982, when Reese’s Pieces was featured in the film E.T., leading to a 65% increase in sales. Product placement remains omnipresent within our cultural climate, and research supports the claim that it enhances purchase intentions. However, what remains unknown is how individual differences may influence susceptibility to product placement. To address this gap, the current study investigated whether individual differences in cognitive capabilities, inhibitory control, age, familiarity, gender and timepoint enhance or reduce the likelihood of individuals’ purchase intentions being influenced by product placement. To do this, 55 participants were recruited (45 after cognitive screening: 23 younger adults (Mage = 21.75, SD = 0.68) and 22 older adults (Mage = 61.62, SD = 8.70)), presented with images of four cups of coffee, and asked to rate their purchase intentions and familiarity with the products. Following this, participants watched three scenes from Coronation Street, with the second clip including a product placement (Costa Coffee). Approximately 48 hours later, participants completed another purchase intentions questionnaire on the same four cups of coffee. The results highlighted that purchase intentions increased immediately post-clip; however, they decreased 48 hours post-clip. Advertisers may therefore use this information to find ways in which the consumer can easily purchase the product immediately post-clip, e.g. through QR codes. No significant relationships were found for the other variables. Thus, it cannot be suggested to advertising agencies that product placement targeted at individuals who fulfil a given criterion (e.g. older adults) will achieve optimal results when compared to non-targeted product placement.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3561">
                <text>Marketing, Product placement, Individual differences, Cognitive capabilities, Inhibitory control, Age, Familiarity, Gender, Purchase intentions.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3562">
                <text>Method Design The present quantitative study adopted a repeated measures design. There were several predictor variables: overall cognitive capabilities (including executive functioning, as assessed by the ACE-III; Hsieh et al., 2013), inhibitory control (as assessed by the Stroop effect), age, familiarity, gender, and timepoint. The dependent variable was susceptibility to product placement, as measured by change in purchase intention. Participants At the time the current study was designed, no published studies had investigated the influence of individual differences on product placement susceptibility; therefore, the required sample size was modelled on the most comparable study the authors could source. Specifically, Hoek et al. (2022) investigated the influence of inhibitory control on advertising literacy activation and advertising susceptibility, recruiting 57 participants. Given the time constraints of data collection, the authors elected to recruit 55 participants. A total of 55 participants volunteered to participate in part one of the study. All participants were recruited via opportunity sampling, through word of mouth and through advertisements placed on various Lancaster University Facebook pages (e.g. the Perception and Action Lab group). Participants were either aged between 18 and 25 (younger adults) or aged 50 and over (older adults). Out of the 55 participants, there were 27 younger adults (19 women; Mage = 21.78; SDage = 0.85) and 28 older adults (18 women; Mage = 60.93; SDage = 8.26). No participant had a known diagnosis of a psychiatric, neurological, or visual impairment, so these impairments were not included in the analysis. All participants were White British/Irish; as there was no variation in ethnicity, ethnicity was not included in the analysis either. Given that cognitive capability was a key predictor variable within this study, it was necessary to ensure that participants with a known or probable cognitive impairment were removed from the study. Accordingly, all participants were screened for the probable presence of mild cognitive impairment using the Addenbrooke’s Cognitive Examination (ACE-III; Hsieh et al., 2013). After applying the pre-validated cut-off point, 10 participants were excluded, leaving 45 participants in the analysis: 23 younger adults (16 women; Mage = 21.75; SDage = 0.68) and 22 older adults (17 women; Mage = 61.62; SDage = 8.70). Materials Inhibitory Control Inhibitory control was measured through an online Stroop task developed and run through Psytoolkit (Stoet, 2010, 2017). Completion of this task required participants to ignore the meaning of the colour word and indicate its print colour. Participants were generally presented with colour words whose meaning and print colour were incongruent with one another. Thus, participants needed to inhibit the prepotent response of reading the word, which occurs automatically in competent readers (von Hippel &amp; Gonsalkorale, 2005), and instead indicate the print colour. Previous scholars have chosen to use the Stroop task as it offers a good measure of individual variation in inhibition (e.g., Long &amp; Prat, 2002). 
As this study was conducted remotely, via the Microsoft Teams share-screen function, participants were asked to verbally indicate the print colour and the researcher pressed the related keys (e.g. r for red, g for green, b for blue, and y for yellow). Participants first completed four practice trials followed by 40 test trials. Cognitive Functioning Cognitive capabilities were measured using an adaptation of the Addenbrooke’s Cognitive Examination (ACE-III; Hsieh et al., 2013). The original version assesses the participant’s attention, memory, fluency, language, and visuospatial abilities and has a combined score of 100. Although the adapted version examines the same five cognitive domains, it has a combined score of 77: some questions were removed as they were not deemed suitable for an online study, namely the first two questions on attention, the first two questions on language, and the first three questions on visuospatial abilities. The original version's pre-validated cut-off point was 88 (88%), and therefore the adapted version's was 68 (88.31%). Participants who scored below the pre-validated cut-off point were removed prior to analysis, to ensure that the presence of cognitive impairment would not confound the subsequent analysis. Demographic and Health Characteristics Demographic information, including age, ethnicity, and gender, and background health information, including whether the participant had a current diagnosis or history of any cognitive, neurological, visual, or psychiatric impairment, was collected through an online Qualtrics questionnaire. Purchase Intentions Questionnaire Prior to the questionnaire, participants were presented with the name and an image of each of the four cups of coffee. Purchase intentions for the four cups of coffee were then measured using a 7-point Likert scale. Participants were asked to rate, on a scale of 1-7 (1 being ‘Extremely unlikely’ and 7 being ‘Extremely likely’), how likely they were to purchase a cup of coffee from Caffè Nero, Costa Coffee, Greggs, and Starbucks. Similarly, familiarity was measured using a 7-point Likert scale: participants were asked to rate, on a scale of 1-7 (1 being ‘Extremely unfamiliar’ and 7 being ‘Extremely familiar’), how familiar they were with each cup of coffee from Caffè Nero, Costa Coffee, Greggs, and Starbucks. Purchase intentions and familiarity were measured using a 7-point Likert scale, rather than the commonly used 5-point Likert scale, as the inclusion of more options enhances the likelihood of acquiring an accurate response (Joshi et al., 2015). It was important that purchase intention and familiarity for Costa Coffee were assessed alongside alternative brands, so that it was not made apparent that the study was focusing upon participants' purchase intention ratings of Costa Coffee only. Caffè Nero, Greggs, and Starbucks were chosen alongside Costa Coffee because, according to a survey conducted by Lock (2022), they are the UK’s four leading coffee shop chains. The images were provided by Adobe Stock (2019) and Dreams Time (2019a, 2019b, 2019c). Product Placement Video The British TV soap Coronation Street was selected, as prior research (e.g. Armstrong, 2018) suggests that it is popular amongst both younger and older adults (YouGov, 2011). The first clip chosen was a scene from 8th January 2018 Part 1, lasting 1 minute 16 seconds. The second clip chosen was a scene from 29th January 2018 Part 1, lasting 1 minute 15 seconds. 
The third clip chosen was a scene from 7th February 2018 Part 2, lasting 1 minute 23 seconds. It was the second scene shown that included the product placement (Costa Coffee). The researcher screen-recorded each clip from https://www.dailymotion.com/gb and saved them in an encrypted file on a password-protected computer. Procedure A member of the psychology department research ethics committee approved the study before it was undertaken. Participants were invited to attend a 40–50-minute online Microsoft Teams meeting at a date and time agreed by the participant and the researcher. To commence, the researcher shared their screen and guided the participant through the participant information sheet and consent form via an online Qualtrics questionnaire. At this time, participants were informed of their right to withdraw up to 2 weeks after participating without giving any reason, and they were told their personal information would remain confidential and would be stored in encrypted files (to which only the researcher and their supervisor had access) on password-protected computers. Participants were only able to progress into the study once verbal consent had been obtained. Participants were then asked to disclose various demographic characteristics (e.g., age and gender) and details relating to their current health status (e.g., any cognitive or visual impairments). The participants were then presented with four images of cups of coffee from Caffè Nero, Costa Coffee, Greggs, and Starbucks, and asked to rate their purchase intentions and familiarity, on a seven-point Likert scale, with these products via an online Qualtrics questionnaire. Following this, participants were asked to watch three short scenes from Coronation Street; the second clip shown included a product placement of Costa Coffee. The researcher then implemented an online Stroop task using Psytoolkit (Stoet, 2010, 2017). The participants were also screened for the presence of mild cognitive impairment through the ACE-III. After this, the participants were presented with the same four images and asked to rate their purchase intentions for these products via the online Qualtrics questionnaire (see Figure 1). Approximately 48 hours after completing the first part of the study, participants were sent an email invitation to complete another online Qualtrics questionnaire. Participants were first asked to provide their participation number, which could be found in the email. They were then shown the same four images of cups of coffee and asked to rate their purchase intentions. Finally, the participants were provided with a debrief form at the end of the online Qualtrics questionnaire (see Figure 2). This debrief disclosed the small degree of deception involved: specifically, it was explained that participants were not informed at the start that the study considered product placement, as this might have influenced the subsequent data. Participants were reminded that they had the right to withdraw up to 2 weeks after participating and were provided with contact details in case they had any questions. The participants' purchase intentions for the four cups of coffee were measured three times throughout the course of the two studies: pre-clip, immediately post-clip, and 48 hours post-clip. 
This was to see whether the participants' purchase intentions for the four cups of coffee, specifically Costa Coffee, had increased or decreased following the product placement clip, and whether their ratings would withstand the test of time (48 hours post-clip). This is why participants were asked to include their participant number in part two, so that each participant's prolonged purchase intention (48 hours post-clip) could be traced back to their earlier purchase intention ratings (pre-clip and immediately post-clip). Figure 1. A flowchart of part one tasks. Figure 2. A flowchart of part two tasks. Data Processing Inhibitory Control Participants' raw Stroop data were downloaded from Psytoolkit into a Microsoft Excel file and saved in an encrypted file on a password-protected computer. From these raw data, the Stroop effect (the average incompatible-condition response time (ms) minus the average compatible-condition response time (ms)) and the percentage error rate (calculated by adding the total of incorrect and timed-out responses and dividing it by 40, the number of trials) were calculated. The Stroop effect and percentage error rate were used as indicators of the participants' inhibitory control capabilities. Specifically, a high Stroop effect would suggest greater difficulty in inhibiting interference, and a higher error rate would suggest reduced inhibitory capabilities. Cognitive Functioning The scores of the ACE-III were summed and entered into the Microsoft Excel file, which was saved in an encrypted file on a password-protected computer. A higher score was indicative of superior cognitive functioning. Demographic and Health Characteristics To ensure all demographic and health data were readable by RStudio, all variables were dummy coded using numerical values. For instance, to determine the participants' gender, they were asked ‘What gender do you identify as?’ and given the option to choose from one of several responses; each response was allocated a number (for example, 1 = Man, 2 = Woman, etc.), and this was entered into the Microsoft Excel document. Susceptibility to Product Placement (Change in Purchase Intentions) To investigate susceptibility to product placement, two difference-in-purchase-intention scores were calculated (one for the short duration, one for the prolonged duration). To calculate these values, the likelihood-of-purchasing rating given prior to watching the clip was subtracted from the rating given after watching the clip (either immediately post-clip or 48 hours after). A positive difference meant that purchase intentions had increased following the placement clip. A negative difference meant that purchase intentions had decreased following the placement clip. A difference of zero meant that the placement clip had failed to alter purchase intentions. Familiarity The familiarity ratings of Costa Coffee were entered into the Microsoft Excel file, which was saved in an encrypted file on a password-protected computer. The higher the score, the more familiar the participant was with the product. Data Analysis To analyse the data, a linear mixed effects model was chosen, because the current study employed a repeated measures design and a linear mixed effects model permits an analysis of hierarchically structured data (Baayen et al., 2008).
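To make the processing steps above concrete, here is a minimal sketch (in Python rather than the RStudio pipeline actually used; all variable names and values are hypothetical illustrations, not study data) of the Stroop effect, the percentage error rate, and the two difference scores:

import statistics

# Hypothetical per-trial response times in ms for one participant.
incompatible_rts = [812, 790, 845, 901]  # word meaning and ink colour mismatch
compatible_rts = [640, 655, 610, 700]    # word meaning and ink colour match
stroop_effect = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)

# Incorrect plus timed-out responses, divided by the 40 test trials.
n_incorrect, n_timed_out = 3, 1
error_rate_pct = (n_incorrect + n_timed_out) / 40 * 100

# Difference scores: pre-clip rating subtracted from each post-clip rating.
pre_clip, post_clip, post_48h = 3, 5, 4
short_duration_change = post_clip - pre_clip      # positive: intentions rose
prolonged_duration_change = post_48h - pre_clip   # zero would mean no change

print(stroop_effect, error_rate_pct, short_duration_change, prolonged_duration_change)

The resulting per-participant scores would then feed into the linear mixed effects analysis described above.</text>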
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3563">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3564">
                <text>Data/RStudio.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3565">
                <text>Dimeck2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3566">
                <text>Reece Graham</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3567">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3568">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3569">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3570">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3571">
                <text>Lancaster </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="177" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3572">
                <text>Implicit Hand Representations in Typical Ageing and in Parkinson's Disease</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3573">
                <text>Cati Oates</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3574">
                <text>16 September 2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3575">
                <text>Having an internal representation of one’s own body is important for many interactions with the environment, and in making decisions about what actions we are capable of performing. However, even in healthy adults, these representations are known to be distorted. In the hand specifically, individuals are likely to underestimate the length of all fingers, but overestimate the distance between each adjacent pair of knuckles. Both healthy ageing and Parkinson’s Disease (PD) involve aspects which are known to further distort body representations, including, but not limited to, diminished tactile sensitivity and impaired action capabilities. This study was designed to investigate the accuracy of hand representations in typical ageing and in PD. Fourteen participants with mild to moderate PD, 17 healthy age-matched controls and 20 younger controls made estimates about the location of hand landmarks when the hand was hidden from view. Estimations were compared with actual hand size. Older controls and individuals with PD both demonstrated more accurate representations of thumb length, and of the distance between the index and middle knuckles, than younger controls, with older controls also showing differences in their perception of the distance between the thumb and index knuckles. However, no differences were found between the PD group and older controls, suggesting that the formation of body representations is an ability which is preserved in PD. Possible explanations for, and implications of, these results are discussed.  &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3576">
                  <text>LUSTRE, acquisition form, wordpress</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3577">
                <text>Participants To determine the number of participants necessary, an a priori power analysis was conducted in G*Power (Faul et al., 2009), using α = 0.05, β = .08 and effect size = 0.32. This effect size was calculated from Longo (2014), which employed a similar methodology. The analysis determined that 10 participants in each condition were required to yield sufficient power. Previous studies using this methodology have included sample sizes ranging from 12-22 participants (Longo &amp; Haggard, 2010, 2012; Peviani &amp; Bottani, 2020). The intended sample size, therefore, was 20 participants per condition. Due to the time constraints of the study, this number was not reached for all conditions, but all conditions included more than the 10 participants suggested by the a priori analysis. Twenty younger controls were tested (15 female). Their ages ranged from 19 to 30 (M = 22.40 yrs, SD = 2.21 yrs). 17 were right-handed and 3 were left-handed, with handedness scores ranging from -89.5 to 100 (M = 64.52, SD = 61.84) on the Edinburgh Handedness Inventory (EHI; Oldfield, 1971). Seventeen healthy older controls were tested (11 female). Their ages ranged from 52 to 79 (M = 66.12 yrs, SD = 9.16 yrs). 14 were right-handed and 3 were left-handed, with handedness scores ranging from -100 to 100 (M = 65.29, SD = 77.31). Fourteen individuals with PD were tested (4 female). Their ages ranged from 54 to 78 (M = 65.93 yrs, SD = 8.43 yrs). All PD participants were right-handed, with handedness scores ranging from 33.5 to 100 (M = 88.31, SD = 21.20). There was no significant difference between the ages of the participants in the typically ageing and the PD conditions, t(29) = 0.06, p = .95. For the PD participants, the most recent diagnosis of PD was 3 years before testing, and the longest-standing diagnosis was 20 years (M = 7.75 yrs, SD = 4.81 yrs). All presented with a Hoehn and Yahr stage of 3 or below, indicating that all participants were physically independent. All participants had been prescribed antiparkinsonian medication, and all were tested under their normal medication regime. Younger controls were recruited through social media and personal connections of the researcher. PD participants were recruited through a Parkinson’s Research Interest Database developed by the researcher’s supervisor (Dr Megan Readman), and by contacting a local branch of Parkinson’s UK. Older controls were primarily friends and family of PD participants. Materials 24 hours before testing, participants were asked to submit demographic information in a questionnaire created using the design software Qualtrics (Qualtrics, Provo, UT). Participants’ hand movements were recorded by an Xbox Kinect camera mounted on the ceiling directly above the hand. The camera had a resolution of 640x480 pixels and a frame rate of 30 captures per second. The recording was made using the Kinect Studio application. Within the frame of the recording, a 30cm ruler was placed to allow for the conversion of pixels to centimetres during analysis. During the experiment, the board used to hide the participants’ hands from view was a piece of black cardboard, approximately 85x60cm. The board was 2mm thick and completely opaque. 
The board was positioned approximately 10cm above the hand, and was supported in this position by 5 cylindrical weights (one under each corner of the board, and one placed centrally). At each side of the board was a small mark of duct tape, indicating where the participants should point between each trial. A mark was placed on each side of the board because the handedness of the participant determined which hand they used during testing, and therefore which side of the board was easier to point to. Participants were asked to point using a red straw, approximately 10cm long. All participants completed the EHI (Oldfield, 1971). This includes a list of tasks (for example, writing or striking a match), for which the participant must indicate which hand they prefer to use. The response options include a strong or slight preference for the right or left hand, or no preference. A score of 100 indicates pure right-handedness, while a score of -100 indicates pure left-handedness. Participants in all conditions were screened for cognitive impairments using the Addenbrooke’s Cognitive Examination (ACE-III; Hodges &amp; Larner, 2017). This assessment included 19 tasks which examine cognitive function across 5 separate domains: attention (e.g. ‘count down from 100 in 7s’), memory (e.g. ‘remember this name and address’), fluency (e.g. ‘name as many animals as you can in one minute’), language (e.g. ‘write two full sentences’) and visuospatial reasoning (e.g. ‘draw a clock which reads 10 past 5’). Typically, a score of less than 87 out of 100 would be considered abnormal; however, as some aspects of the ACE-III require participants to perform motor tasks, it is accepted that the best cut-off score to identify cognitive impairment in Parkinson’s is 80 points (Kaszás et al., 2012). Using this assessment as an exclusion criterion, only 1 PD participant’s data was removed from further analysis. There was no significant difference in the ACE-III scores of the remaining participants between the three conditions, F(2, 48) = 2.10, p = .13. Participants in the PD condition were also assessed using the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS; Goetz et al., 2008) to determine the severity of PD symptoms at the time of testing. The UPDRS assesses both the motor and the non-motor symptoms of PD. The non-motor assessment involves questions about the individual’s experience of symptoms during the past week, for example how well they are sleeping, and whether they are experiencing tremors regularly. A motor assessment is also conducted, with the participants performing tasks such as opening and closing their hand as quickly as possible, and walking from one side of the room to the other. The researcher was also required to make judgements about the severity of typical PD symptoms, such as tremors and rigidity, present throughout the examination. All questions and tasks are scored on a scale of 0 to 4, with 0 indicating no impairment and 4 indicating severe impairment. This assessment has previously been validated and determined to be a reliable indicator of the severity of PD symptoms at the time of testing (Gallagher et al., 2012; Martinez-Martin et al., 2013). Testing occurred in the action and perception lab in the Whewell Building at Lancaster University. This study received ethical approval from the Ethics Department of Lancaster University. 
Procedure Participants were emailed an information sheet 24 hours in advance to inform them of the requirements of the study. This email also directed them to a Qualtrics survey, where they were asked to submit their demographic information (age and sex). Here, they also completed the EHI, and were asked to confirm that they had normal or corrected-to-normal vision. On the day of testing, participants were first screened for cognitive impairment using the ACE-III. At this point PD participants also completed the full MDS-UPDRS. After the recording had started, participants were asked to place their dominant hand (as determined by the EHI) on the table in front of them. They were asked to move their chair so that their hand was aligned with the middle of their body. The participants were instructed not to move their hand throughout the experiment, before an occluding board was placed so that the participants could no longer see their hand. They were asked verbally to confirm that this was the case. Participants were given a straw to use as a baton with which to point. They were then directed to use the straw to point on the board, directly above where they believed specific locations of the hand to be. Ten different locations were used: the tips of each finger, and the knuckle where each finger meets the palm of the hand. Small duct tape marks were placed on the knuckles of each finger, both to ensure that the participants were clear about which knuckles were intended and so that the location of each knuckle would be clearer on the recording. The location for each trial was read aloud by the experimenter. Between each trial, participants were asked to move the straw to point at a duct tape mark on the side of the board. This was to ensure that all estimations were made where participants believed their hand to be, rather than participants using alternative methods, such as judging where they believed one location to be based on the previous location. One block of testing consisted of 10 trials (one trial for each hand landmark). For the younger control condition, participants were directed to each landmark 10 times, meaning that data were obtained over 10 blocks. However, testing of the first PD participant determined that asking participants in this condition to complete all 10 blocks was not a viable option: individuals with PD suffer from motor fatigue (Fabbrini et al., 2013), and multiple repetitive tasks led to an increased severity of PD symptoms such as tremors. For these reasons, all subsequent participants completed only 5 blocks of 10 trials each. This ensured we still had 5 estimations for each landmark, without causing distress to participants. Two different random orders were created for the presentation of the locations, and these were randomly assigned to participants. After testing, the occluding board was removed so that the recording could be used to ensure that the hand had not significantly moved throughout the testing period, before the recording was ended. Data Analysis To determine both the actual and estimated locations of the hands, the recordings were replayed using the Kinect Studio software. 
For each trial, the footage was paused when the participant had the straw pointed at the estimated location. The cursor was then moved to this point, and the x and y coordinates of the cursor were manually entered into a spreadsheet. The same method was used to determine the actual position of each hand while the occluding board was not in place. The beginning and end of each recording were examined to confirm that the hand had not moved between the start and the end of the experiment. It was often the case that, although the hand had not moved in any significant way, there was a difference of a couple of pixels in the position of a few landmarks. For this reason, the x and y coordinates of the hand position were recorded both before the board was placed and after it was removed, and the average of these locations was used. For analysis, we were interested in the overestimation of the length of each finger and of the distance between each pair of adjacent knuckles. To calculate the length of each finger, the difference between the x coordinates of the tip and knuckle of the finger was calculated, and the same was done for the y coordinates. Pythagoras’s theorem was then employed to determine the distance, leading to the following formula: distance = √((x_tip − x_knuckle)² + (y_tip − y_knuckle)²). The same formula was adapted to determine the distance between each pair of knuckles. These distances were calculated for each block of 10 trials, and then the average was taken for each participant, before being compared to the actual measurements to calculate the percentage overestimation of each distance. For the detection of outliers, all estimations were plotted using RStudio. Code was adapted from Helbing (2020) to plot an ellipse for each hand location per participant, encompassing at least 80% of all data points. Estimations outside these ellipses were treated as outliers and removed from further analysis. Setting the inclusion of data points to 80% meant that, even for older participants, who only performed 5 trials per location, it was still possible for outliers to fall outside the ellipse. RStudio did not have the capacity to plot 10 separate ellipses at once, therefore 2 separate plots had to be made per participant. Before analysis, hand maps were also created using RStudio. Although these plots were not used for analysis, they helped to visualise the data. All hand maps can be found in the Appendices.
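As a worked illustration of the distance and overestimation calculation above, here is a minimal sketch (Python rather than the RStudio pipeline used in the project; the coordinate and measurement values are hypothetical, not taken from the study):

import math

# Hypothetical landmark coordinates for one finger, in cm
# (pixel coordinates are assumed already converted via the 30cm ruler).
tip_x, tip_y = 12.4, 30.1
knuckle_x, knuckle_y = 11.8, 22.6

# Pythagoras: estimated finger length from the x and y differences.
estimated_length = math.hypot(tip_x - knuckle_x, tip_y - knuckle_y)

# Percentage overestimation relative to the actual measured length.
actual_length = 7.1  # hypothetical measurement taken with the board removed
overestimation_pct = (estimated_length - actual_length) / actual_length * 100
print(round(estimated_length, 2), round(overestimation_pct, 1))
</text>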
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3578">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3579">
                <text>Excel/xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3580">
                <text>Oates2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3581">
                <text>Eleanor Bater</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3582">
                <text>Open </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3583">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3584">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3585">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3586">
                <text>LA1 4YT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3587">
                <text>Megan Readman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3588">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3589">
                <text>Clinical&#13;
Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3590">
                <text>51</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3591">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="178" public="1" featured="0">
    <fileContainer>
      <file fileId="182">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e32fd4b78a04d80fbd7ab6a2265e4e18.doc</src>
        <authentication>256ef86491c441698bcfc5ee5229e045</authentication>
      </file>
      <file fileId="183">
        <src>https://www.johnntowse.com/LUSTRE/files/original/9afa665c5da6819a6abbbd58e667a644.doc</src>
        <authentication>631db54d3ce3200d45c2ae79bd707902</authentication>
      </file>
      <file fileId="184">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f30fb576987e0b467d3513fc694e2664.doc</src>
        <authentication>d3124e1b6d8b9a680f5536ce571b8ff8</authentication>
      </file>
      <file fileId="185">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c64d5e32836a09411d6c54f360aa5150.zip</src>
        <authentication>f935452bb7362ea34c7bdbf534f2ae95</authentication>
      </file>
      <file fileId="186">
        <src>https://www.johnntowse.com/LUSTRE/files/original/74c9cface8f2b419890e4a59d9efcb50.txt</src>
        <authentication>2ac6d69af18034d15f184bd0296aca8e</authentication>
      </file>
      <file fileId="187">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e5da75832ace5a7b42ae41c6efb607d0.zip</src>
        <authentication>de3e38ff5089cfbc98b74521d27b8b28</authentication>
      </file>
    </fileContainer>
    <collection collectionId="10">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="819">
                  <text>Interviews</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3592">
                <text>An exploration of the psychological mechanism and effectiveness behind the co-creation process in advertising, based on the ‘Co-create by Sharp’ method. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3593">
                <text>Maria Gabriela Vivero Donoso</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3594">
                <text>06/09/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3595">
<text>Scholars have described co-creation as the interaction between firms and stakeholders to create value. Co-creation for product innovation and campaign delivery is a growing trend in today’s competitive market, driven by demand for consumer-centric communication and product development strategies. Although traditional research techniques are relevant for evaluating brand messaging, they are considered backward-looking: techniques such as interviews, questionnaires, and focus groups operate on companies’ terms rather than in the customer’s domain, limiting stakeholders to reacting to market offers instead of cooperating to build them. Applying co-creation techniques does not replace reactive research but is the next step in building brand and campaign strategies. &#13;
The Sharp Agency developed ‘Co-create by Sharp’, a co-creation methodology that aims to build campaign and brand strategies with a higher value of insight than other approaches. According to The Sharp Agency, its approach to co-creating ideas with stakeholders has demonstrated effectiveness in clients’ performance (e.g., a 400% revenue increase, 33% faster growth, and a 19% increase in spending). However, the method lacks evidence supporting its efficacy; more specifically, it lacks an exploration of the perceptions of the people involved in the co-creation methodology (i.e., co-creation participants, Sharp team members, and Sharp’s commissioning clients). &#13;
This report aims to identify the presence of plausible psychological theories that could support the ‘Co-create by Sharp’ methodology. Accordingly, this study intends to explore the dynamics, perceived effectiveness, and limitations of the ‘Co-create by Sharp’ methodology through a series of individual interviews with the people involved in the process. &#13;
The researcher worked as an intern at The Sharp Agency, and a qualitative experimental design was used to investigate the research objective. Three types of interviews were conducted to understand the ‘Co-create by Sharp’ process from its main perspectives: co-creation participants, Sharp team members, and Sharp’s commissioning clients. &#13;
Findings indicated that the ‘Co-create by Sharp’ method is supported by a specific psychological mechanism explained by Self-Determination and Implicit Self-Esteem theories. Based on these theories, interviewees’ perceptions of co-creation suggest that the ‘Co-create by Sharp’ methodology generates participant engagement in brand co-creation. According to the literature reviewed, participant engagement increases the level of insight in co-creation outcomes. As a result, this report has determined that the ‘Co-create by Sharp’ methodology produces a chain of benefits that begins with psychological benefits and brand-self connection, resulting in higher campaign delivery effectiveness. &#13;
In conclusion, the ‘Co-create by Sharp’ methodology appears to be supported by a psychological mechanism that motivates participants to co-create in developing campaign strategies and brand building. Moreover, co-creating with stakeholders, as a step beyond gathering data through market research techniques, could increase customer value in campaign delivery. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3596">
<text>Co-creation, advertising, psychology, behaviour</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3597">
<text>The researcher worked as an intern at The Sharp Agency to better understand the company’s way of working and the ‘Co-create by Sharp’ method. The internship allowed the researcher to be involved in various steps of the co-creation method:&#13;
1) Attend co-create sessions and observe participant behaviour (see Appendix K and Appendix L).&#13;
2) Develop post-co-create decks of information, including sessions’ outputs.&#13;
3) Participate in strategic brainstorming sessions.&#13;
4) Collate evidence of the final results of messaging and visuals for campaign delivery. &#13;
A qualitative experimental design was used to investigate the research objectives and provide answers to the research questions. Three types of interviews were conducted with different participant profiles: co-creation participants, Sharp team members, and Sharp clients. Interview sessions lasted between twenty and thirty minutes and followed a pre-determined discussion guide for each interview; the study received ethical approval. Interviews were designed to gather insights about co-creation perceptions from every person involved in the process.&#13;
A qualitative design allowed interviewees to speak freely about their co-creation experience with The Sharp Agency. Considering the research aimed to explore people’s attitudes, a quantitative method would not have been appropriate. Instead, a qualitative design allowed for gathering a spectrum of people’s observations and feedback.&#13;
&#13;
Sampling&#13;
Representative sampling was used to obtain results that reflect each participant profile’s perspective. Interviewing involved five participants from the latest co-creation sessions moderated by Sharp, seven Sharp team members with roles in different stages of the co-creation process (including founders of the ‘Co-create by Sharp’ method), and three commissioning clients representing market-leading companies (i.e., Medical Protection Service, Barbour ABI, and Lonza).&#13;
Considering that Medical Protection Service (MPS) and Lonza are part of the healthcare industry and Barbour ABI provides data and intelligence to the construction industry, these companies manage technical language and require higher accuracy of message delivery (Ekiyor &amp; Altan, 2021; Mokhtariani et al., 2017).&#13;
This project received ethical approval under the auspices of the Lancaster University Psychology Department (see Appendix M). Participants gave informed consent using a consent form sent and signed via e-mail (see Appendix B). Participants were additionally provided with a debrief sheet, including contact details, should they have further questions (see Appendix C).&#13;
&#13;
Materials&#13;
Interviews were structured using three discussion guides (see Appendix E, Appendix F, and Appendix G). These were devised based on the objectives of the investigation, set collectively with Sharp. Each discussion guide addressed a specific question based on the participant profile (co-creation participants, Sharp team members, and commissioning clients). Participants were encouraged to elaborate on their answers as much as possible. Interviews conducted virtually were recorded using Microsoft Teams; in-person interviews were recorded using Apple’s Voice Memos app.&#13;
&#13;
Research Procedure&#13;
Participants were introduced to the researcher by The Sharp Agency and invited to a scheduled interview via Microsoft Teams or, in the case of Sharp team members, at Sharp’s headquarters. The interviewer followed a discussion guide that began with questions allowing participants to introduce themselves and warm up to the conversation, and concluded with the questions intended to obtain the most robust responses. For further analysis, interviews were transcribed using the Otter.ai software. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3598">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3599">
<text>Microsoft Word (.doc)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3600">
                <text>Donoso2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3601">
                <text>Madie Lulek</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3602">
                <text>Open </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3603">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3604">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3605">
                <text>Qualitative Data </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3606">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3607">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3608">
<text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3609">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3610">
                <text>3 commissioning clients, 5 co-creation participants, and 7 Sharp team members</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3611">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="179" public="1" featured="0">
    <fileContainer>
      <file fileId="188">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6d20e4d1e492485766537bc65023ff1d.csv</src>
        <authentication>fc464b4956692e558cf73d0dac2825c0</authentication>
      </file>
      <file fileId="189">
        <src>https://www.johnntowse.com/LUSTRE/files/original/92a8d43f4a29aa993e26ae5ebccfccab.csv</src>
        <authentication>d60890e730eb0bbec9a7f8bdc0eda7d3</authentication>
      </file>
      <file fileId="192">
        <src>https://www.johnntowse.com/LUSTRE/files/original/b4e318b0ff40205dddb2e27a77319608.pdf</src>
        <authentication>15ac31078692a6a822b1e06dfab1c670</authentication>
      </file>
      <file fileId="193">
        <src>https://www.johnntowse.com/LUSTRE/files/original/cbbeddddc81f7b965d506800abffce2f.pdf</src>
        <authentication>4a371fd6b1e3934f109efa94739a594c</authentication>
      </file>
      <file fileId="194">
        <src>https://www.johnntowse.com/LUSTRE/files/original/590ec6b7290dc81518e7712aadc3652b.pdf</src>
        <authentication>c7a6bf799aa2440a9ea4f1493f2201f9</authentication>
      </file>
      <file fileId="200">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2ff293badc8b69d085fc0772f35ed5dd.pdf</src>
        <authentication>68da4579ceea5dc91732edef31c61a16</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3612">
                <text>Han-Yi Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3613">
<text>03/09/2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3614">
<text>Inner speech is a cognitive function related to language processing. Because its functions reflect information processing and memorising, it may be linked to the purchasing process, which includes searching for and evaluating product information. Inner speech may also help people think about and imagine using a product during the purchasing process.&#13;
This study investigated the role of inner speech in the purchasing process and how it might affect decision-making time. It also considered how inner speech may be identified and suppressed. Participants’ data were collected through experiments and several questionnaires. The findings indicated that inner speech might help people with information search and alternative evaluation and might affect decision time. The findings also suggested what people may consider and how they use inner speech. &#13;
By uncovering the potential relationship between the purchasing process and inner speech, this research provides valuable information for the marketing and psychology research fields. It offers companies suggestions for practical use, reflecting how people may use inner speech during the purchasing process.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3615">
                <text>Inner speech, memory, decision-making, purchasing behaviour.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3616">
<text>This study was approved by the ethics committees at Lancaster University. There were no ethical issues for the researcher in managing personal information. Participants remained anonymous and were assigned subject IDs (P01, P02, P03, …, P30 in Experiment 1 and PCT01, PCT02, PCT03, …, PCT30 in Experiment 2). All data were stored anonymously with no identifiable information. &#13;
Participants were given the Participant Information Sheet (PIS) before participating in the experiments. On the day of testing, they could ask any questions they had and then consented to attend the experiment in person or via online platforms such as Microsoft Teams, Zoom, or Google Meet, which allowed the researcher to check that suppression remained active when needed. The experiment took approximately 30 minutes, including answering all questionnaires, and was held in the participant’s home or another quiet place so that the participant would not be disturbed.&#13;
Experiment 1&#13;
Participants&#13;
G*Power suggested 52 participants for within-subjects t-tests and mixed linear regression models, with an effect size of .4, an α-error probability of .05 (5%), and 80% power (1−β) (Brysbaert, 2019). Thirty participants were recruited for this experiment, with no record or history of neuropsychological disorders such as dyslexia or aphasia, to ensure that no such condition would influence the results or prevent completion of the tasks. The recruitment process included in-person invitations around campus and social media messages to reach diverse participants.&#13;
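The a-priori estimate above was produced in G*Power; as a cross-check, a minimal R sketch using the pwr package (an assumption; the project itself reports using G*Power) reproduces the same figure:&#13;
# Sketch only: a-priori sample size for a paired (within-subjects) t-test.&#13;
library(pwr)&#13;
pwr.t.test(d = 0.4, sig.level = 0.05, power = 0.80,&#13;
           type = "paired", alternative = "two.sided")&#13;
# Returns n of roughly 51 pairs, which G*Power rounds up to 52.&#13;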
Although only 30 participants were recruited in this experiment, the t-test results suggest that the effect size (see the Experiment 1 results section) may be large enough to test the hypothesis.&#13;
Design&#13;
This study used an experimental within-subjects design. Participants simulated a purchase experience in a suppression task and in a control task without interference. The independent variables were the self-rated agreement on information search and alternative evaluation and participants’ average decision time in the suppression and control tasks. The dependent variables were inner speech frequency in five dimensions, measured by the Varieties of Inner Speech Questionnaire (VISQ). &#13;
Quantitative data were analysed using R to conduct t-tests, GLMMs, and CLMMs. Qualitative data were collected through questionnaires and categorised into different variables to identify why participants made their decisions and what their inner speech content was during the purchasing process.&#13;
Overall, the experiment aimed to investigate how people use inner speech during purchasing and whether the articulatory suppression task and the task without interference influenced decision time and agreement scores on information search and alternative evaluation.&#13;
Materials&#13;
Stimuli&#13;
Participants viewed six product sets (stimuli) whose information was copied from the official websites. To prevent participants from focusing on the effect of the products’ brands and prices (Albari &amp; Safitri, 2020), the products in each set were of the same brand with similar or identical prices, unisex, and recognisable, although these products might no longer exist or reflect the latest information on the market.&#13;
Two-item Statement Questions (see Appendix B)&#13;
Participants rated the two statements on a seven-point Likert scale from strongly disagree to strongly agree (Maity &amp; Dass, 2014) to identify the agreement level for Information Search and Alternative Evaluation between tasks. After each purchasing decision, participants were asked: “Which product did you choose? Why?”&#13;
Varieties of Inner Speech Questionnaire (VISQ, see Appendix C)&#13;
The VISQ (Alderson-Day et al., 2018) included twenty questions asking participants to rate their general inner speech frequency after the mock e-commerce purchasing tasks on a 7-point Likert scale ranging from “Never” to “All the time”. Questions 7 and 15 were reverse-coded; their values were reversed before analysis.&#13;
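As an illustration, reverse-coding on a 7-point scale subtracts the raw score from 8; a minimal R sketch, with a hypothetical data frame visq and item columns Q7 and Q15:&#13;
# Sketch: reverse-code VISQ items 7 and 15 (7-point scale, so 8 - x).&#13;
visq$Q7  = 8 - visq$Q7&#13;
visq$Q15 = 8 - visq$Q15&#13;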
Experiment 1 Qualitative Questions (ExpQ1, see Appendix D)&#13;
After participants finished all the tasks (six decisions), they were asked to answer three questions at the end of the experiment. These questions gathered qualitative data about the participants’ experiences during the mock e-commerce purchasing tasks and what they had in mind. &#13;
Procedure&#13;
Figure 2 illustrates the procedure of Experiment 1. Participants were invited to join the research, gave consent, and completed the suppression and control (no-interference) tasks. &#13;
Each task contained three product sets; participants were asked to imagine choosing a product for themselves or a friend based on the information provided on the mock e-commerce channel (Maity &amp; Dass, 2014). The researcher’s or participant’s screen presented the information, including the price and details of the product set. The two tasks were counterbalanced and randomly ordered, so participants repeated the decision-making process three times in the control task and three times in the suppression task. After each decision, participants answered the two-statement questionnaire and explained which product they chose and why. Participants completed the control task unaided; before the suppression task, they practised counting out loud from 1 to 4 in time with 160 bpm metronome sounds until the researcher was satisfied that suppression was maintained.&#13;
In the last part of the study, participants answered the VISQ, which measured their inner speech frequency, and the qualitative questionnaire (ExpQ1), which probed how they used inner speech when viewing the products. &#13;
Analysis&#13;
R was used to analyse the quantitative data: t-tests identified task differences, and Generalised Linear Mixed-Effects Models (GLMM) and Cumulative Link Mixed Models (CLMM) examined the relationships between variables in the two tasks. When conducting the GLMM with a Gamma family, the quantitative data followed the standard data-trimming procedure, keeping the trimmed data within 5% or 2.5 standard deviations (Berger &amp; Kiefer, 2021). &#13;
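A minimal R sketch of this pipeline, assuming the lme4 and ordinal packages and a hypothetical data frame d with columns subject, task, decision_time, and rating (the level names "suppression" and "control" are also assumptions):&#13;
library(lme4)     # glmer() for the Gamma GLMM&#13;
library(ordinal)  # clmm() for the cumulative link mixed model&#13;
&#13;
# Trim decision times to within 2.5 SD of the mean, as described above.&#13;
z = as.numeric(scale(d$decision_time))&#13;
d = d[abs(z) &lt; 2.5, ]&#13;
&#13;
# Paired t-test on per-participant mean decision times.&#13;
means = tapply(d$decision_time, list(d$subject, d$task), mean)&#13;
t.test(means[, "suppression"], means[, "control"], paired = TRUE)&#13;
&#13;
# Gamma GLMM: decision time by task, random intercept per participant.&#13;
m1 = glmer(decision_time ~ task + (1 | subject),&#13;
           family = Gamma(link = "log"), data = d)&#13;
&#13;
# CLMM: ordinal Likert agreement ratings by task.&#13;
d$rating = factor(d$rating, ordered = TRUE)&#13;
m2 = clmm(rating ~ task + (1 | subject), data = d)&#13;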
The qualitative coding scheme (see Appendix F) was created to identify what participants considered and what they said to themselves using inner speech during the experiment. The coding process involved re-reading the data to identify relevant content and assign it to the appropriate categories. For example, if participants mentioned that they had used the product before, the value of the variable “Memory” increased by one unit. These variables were then used to identify which factors most influenced participants’ purchasing decisions. Following the same coding scheme, the kind of inner speech used when viewing the products could also be identified; for example, people may ask themselves questions or repeat the product name in their minds.&#13;
In summary, quantitative and qualitative data were analysed to report results for different purposes and to test the hypothesis of this research.&#13;
Figure 2&#13;
The Diagram of Experiment 1 Procedure&#13;
Note: Participants were required to do suppression and control tasks; the order was randomised and counterbalanced. The products presented during the tasks were also randomised.&#13;
&#13;
Experiment Optimising&#13;
The task without interference in Experiment 1 may not be an adequate control task, since it might carry a secondary-task effect: participants completed both tasks, so performance in the control task might have been influenced by having already done the suppression task. &#13;
As a secondary task, finger tapping, which has been used in inner speech experiments, could be a better control task for Experiment 2 (Emerson &amp; Miyake, 2003; Wallace et al., 2009). Although finger tapping might affect working memory function and people’s ability to memorise (Armson et al., 2019; Kane &amp; Engle, 2000; Moscovitch, 1994; Rose et al., 2009), Rogalsky et al. (2008) noted that comprehension of complex sentences might decrease under finger tapping, but not as much as under articulatory suppression. &#13;
Therefore, a second experiment was conducted to replicate the results with a better control condition involving finger tapping.&#13;
Experiment 2 &#13;
Participants&#13;
Based on the findings of Experiment 1, another 30 participants were recruited; the recruitment requirements and process were the same as in the first experiment.&#13;
Design&#13;
The independent variables were similar to those of Experiment 1; the only difference was that the control task was changed to the finger-tapping task. The goal of the design was to replicate the results of Experiment 1 and further investigate the role of inner speech in the purchasing process.&#13;
Materials&#13;
Experiment 2 used the same materials as Experiment 1. The only difference was the qualitative questions after the tasks: in Experiment 1, participants answered the “Experiment 1 Qualitative Questions” at the end of the experiment, whereas here, to better capture the difference between tasks, they answered a similar questionnaire (see below) after each task to uncover the inner speech used in the two tasks.&#13;
Experiment 2 Qualitative Questions (ExpQ2, see Appendix E)&#13;
Participants were asked to answer three questions about their experiences during the mock e-commerce purchasing tasks and what they had in mind, separately for the suppression and finger-tapping tasks.&#13;
Procedure&#13;
The procedure was the same as in the first experiment, except for the adjusted control task and the timing of the qualitative questionnaire (ExpQ2). As Figure 3 illustrates, participants completed the suppression and finger-tapping tasks with the same stimuli, similar questionnaires, and the same method of presenting stimuli (in person or via online platforms). Before starting, participants practised counting 1, 2, 3, 4 out loud or tapping their index, middle, ring, and little fingers in order (depending on which task came first), following metronome beats at 160 bpm, until the researcher decided to move on. In each task, they viewed a product set three times, imagining choosing one product for a friend or themselves. After each decision, participants rated the two statements and reported which product they chose and why. They then answered the three qualitative questions (Appendix E) after each task. They repeated the other task following the same process, with a 2-minute break between tasks. After finishing the finger-tapping and suppression tasks, they answered the VISQ questions at the end of the experiment.&#13;
Analysis&#13;
R was also used to analyse the quantitative data for the same purposes, following the same data-trimming procedure where needed. The same coding scheme was applied to generate results that could replicate and sharpen the Experiment 1 findings. Overall, the second experiment aimed to produce the same or clearer results than Experiment 1 and to yield more useful information about the different inner speech used across tasks.&#13;
In conclusion, these two experiments and their analyses give this research a deeper understanding of inner speech and its role, and provide more precise information on how inner speech may relate to the purchasing process.&#13;
Figure 3&#13;
The Diagram of Experiment 2 Procedure&#13;
Note: Participants were required to do suppression and control tasks; the order was randomised and counterbalanced. The products presented during the tasks were also randomised.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3617">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3618">
                <text>The data format is csv.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3619">
                <text>Wang03092023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3620">
                <text>Han-Yi Wang</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3621">
                <text>Open (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3622">
                <text>None (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3623">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3624">
                <text>Data or text</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3625">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3626">
                <text>Inner Speech and Its Role in Purchasing Decision-Making Process: Analysis of Within-Subjects Experiment and Questionnaires</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3627">
<text>Dr Bo Yao</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3628">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3629">
                <text>Cognitive&#13;
Cognitive - developmental&#13;
Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3630">
<text>60 participants: 30 in Experiment 1 and 30 in Experiment 2.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3631">
<text>Quantitative: t-tests, GLMM, CLMM&#13;
Qualitative: thematic analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="180" public="1" featured="0">
    <fileContainer>
      <file fileId="201">
        <src>https://www.johnntowse.com/LUSTRE/files/original/a4991188e6e5a175e3e601f45ba8d3d5.csv</src>
        <authentication>c132301a3565074e33f0070c2a24dfd8</authentication>
      </file>
      <file fileId="202">
        <src>https://www.johnntowse.com/LUSTRE/files/original/08c1180ec22a5097b67bdf27998d19cd.csv</src>
        <authentication>8b729f810fe7b7fa25327f0ec2d0e5be</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3632">
<text>Inner Speech and Grit: Do Positive Inner Speech and Evaluative Inner Speech Lead to Grit Behaviour?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3633">
                <text>Huzaifah Adam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3634">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3635">
<text>Grit, defined as perseverance and passion for long-term goals, is a reliable predictor of success metrics, surpassing even IQ. While grit has been explored extensively, studies on its mechanisms are still lacking. Inner speech, the silent production of words in one’s mind, plays a pivotal role in managing thoughts, including cognitive reframing, which is essential for enhancing perseverance. Theoretically, inner speech can predict grit. This study, employing a survey and an experimental design, aimed to investigate whether positive inner speech and evaluative inner speech can predict grit behaviour. The data for this study (n = 56) were collected in two ways: (1) using the grit scale and the inner speech VISQ-R via a Qualtrics survey, and (2) using participants’ task retention decisions and a qualitative classification approach. The data were analysed using RStudio: the survey data via a linear model, and the qualitative data via a generalised linear mixed-effects model. The survey results showed that only evaluative inner speech positively predicts grit. However, the results regarding participants’ task retention decisions were mixed. Collectively, these findings underscore that grit can be predicted by evaluative inner speech, prompting further research to explore its multifaceted role in shaping grit across various domains.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3636">
                <text>Inner speech, grit, articulatory suppression</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3637">
<text>This study applied a mixed-methods, correlational research design that aimed to examine whether evaluative inner speech and positive inner speech lead to grit behaviour. The data for this study were collected using two methods: (1) questionnaires through a Qualtrics survey, and (2) an experimental task in which participants completed two sets of puzzles under different conditions (baseline and with articulatory suppression) and reported their retrospective experience after each puzzle task. Participants’ task retention decisions (decisions to quit) were also recorded. Three different analyses were applied in the research. For the first analysis, the positive inner speech and evaluative inner speech scores from the VISQ-R acted as the predictors, and grit from the Short Grit Scale as the outcome. For the second analysis, the participant’s grit score acted as the predictor, and the participant’s task retention decision as the outcome. Lastly, for the third analysis, the types of inner speech identified from the participant’s retrospective experience (positive inner speech and evaluative inner speech) acted as the predictors, and the participant’s decision to quit or not acted as the outcome.&#13;
&#13;
In this study, the participants were students at Lancaster University, ranging from undergraduate to master’s and doctoral students. Participants were recruited using social networks, direct emails, and posters around the campus and/or on social media. Each session took approximately 30 minutes, including the briefing, and each participant was reimbursed with five GBP for participating. Ethical approval for this study was granted by the ethics committees at Lancaster University.&#13;
&#13;
The number of participants involved in the study was 56 in total. This number was determined using G*Power. The test family was set to the t-test because the research compared a control approach (baseline) with an experimental approach (with articulatory suppression). The effect size f² was set at 0.15, the α-error probability at 0.05 (5%), and the power (1−β) at 0.8 (80%), with the number of predictors set at five. Of the 56 participants, 23 (41%) were male and 33 (59%) female; 15 (27%) were native English speakers and 41 (73%) were non-native speakers.&#13;
&#13;
Demographic Information: The demographic information collected pertained to each participant’s attributes, including sex (male, female, non-binary/third gender, and prefer not to say) and native English background (yes or no). Although the study has no bias towards participants’ native language, the word used for suppression, ‘aluminium’ (suggested by Gathercole and Baddeley, 2014), may influence fluidity of pronunciation, making the articulatory suppression more challenging for non-native speakers.&#13;
&#13;
Varieties of Inner Speech Questionnaire Revised (VISQ-R): The VISQ-R was developed to capture the everyday phenomenology of inner speech, including any psychopathological traits and inner dialogue (Alderson-Day et al., 2018). There are two versions of the Varieties of Inner Speech Questionnaire: the original consisted of 18 items, and the revised VISQ-R consists of 26 items (see Appendix D), taking approximately 5-8 minutes to complete via a Qualtrics survey. In this study, the VISQ-R was presented as ‘internal experience questions’ rather than under its real name, to eliminate possible response biases.&#13;
&#13;
Responses to the VISQ-R are subdivided into five dimensions, each item scored on a seven-point scale (Not like me at all – Very much like me): dialogical, evaluative, condensation, other people, and positive. A higher dialogical score indicates that the person often uses inner speech to exchange ideas with themselves. A higher evaluative score means that the person often uses inner speech to evaluate their thoughts, actions, and decisions. For condensation, a higher score indicates that the person talks to themselves in a concise, condensed manner that encapsulates complex thoughts or ideas. A higher ‘other people’ score indicates that the person often imagines other people’s voices or opinions when engaging in inner speech. Lastly, a high positive score indicates that the person often uses inner speech to encourage themselves in a supportive and comforting manner. Subscale totals for each dimension were obtained by summing the scores within each subscale and dividing by the number of items answered in that subscale.&#13;
&#13;
The Varieties of Inner Speech Questionnaire has been supported for its reliability and validity in measuring inner speech. Racy et al. (2022) studied the reliability of the VISQ-R and compared it to six other instruments relating to inner speech: the VISQ-R showed moderate to strong concurrent validity with the other instruments, with self-evaluation showing a strong correlation with other measures. Internal consistencies and reliabilities were excellent (Cronbach’s α &gt; .80) for each of the dimensions, with only the positive dimension slightly lower, and test-retest reliability was moderate to high (&gt;.60) (Alderson-Day et al., 2018).&#13;
Short Grit Scale (Grit-S): The Grit-S questionnaire was developed by Angela Duckworth to measure trait-level perseverance and passion for long-term goals (Duckworth &amp; Quinn, 2009). The Grit-S consists of eight items (see Appendix D), four fewer than the original version, retaining the factor structure while improving the psychometric properties. The questionnaire takes approximately 3-5 minutes to complete in the Qualtrics survey. As with the VISQ-R, the Grit-S was presented as a personality questionnaire rather than a grit scale to avoid possible biases.&#13;
The Grit-S is scored on two dimensions: Consistency of Interest, where a higher subscale score indicates that the individual is able to maintain interest in and focus on a long-term goal, and Perseverance of Effort, where a higher subscale score represents sustained effort towards a long-term goal despite setbacks (Van Doren et al., 2019). The Consistency of Interest subscale score is obtained by adding the scores of its items (items 1, 3, 5, and 6), and the Perseverance of Effort score by adding items 2, 4, 7, and 8. A few items are inversely coded and were recoded before running the analysis.&#13;
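A minimal R sketch of this scoring scheme, with a hypothetical data frame grit and item columns g1 to g8; the reverse-coded item set and the 5-point response scale below are illustrative assumptions, so the published scoring key should be used in practice:&#13;
# Sketch: recode reverse-coded items (here assumed to be the Consistency&#13;
# items on a 5-point scale, so reversed value = 6 - x), then sum subscales.&#13;
rev_items = c("g1", "g3", "g5", "g6")&#13;
grit[rev_items] = 6 - grit[rev_items]&#13;
grit$consistency  = rowSums(grit[c("g1", "g3", "g5", "g6")])&#13;
grit$perseverance = rowSums(grit[c("g2", "g4", "g7", "g8")])&#13;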
Several research studies have confirmed the validity and reliability of the Short Grit Scale. Eskreis-Winkler et al. (2014) conducted a study predicting retention in the military, using the grit instrument to measure the grit level of cadets. The instrument proved reliable: grittier soldiers were more likely to complete Army Special Operations Forces (ARSOF) selection, to keep their jobs, and to stay married. In a more recent study by Priyohadi et al. (2019), the Grit-S again demonstrated its validity and consistency: internal consistencies between items within a dimension were moderate to high (&gt;.60) for both Perseverance of Effort and Consistency of Interest, with high consistency across studies.&#13;
Active Task: A jigsaw puzzle was used as the active task for this research. Two jigsaw puzzles from Livewire Puzzles, rated by the website as expert-level, with 70 pieces (10 × 7) and an 8-minute time limit, were predetermined. The puzzles can be accessed through the games.puzzle.ca website and were created by Arkadium, a company well recognised for making online games. New puzzles are uploaded daily, but to avoid any possible advantage or disadvantage, the puzzles used were those from the 21st and 22nd of June 2023. Marks were also provided at the end of each puzzle.&#13;
&#13;
There were two ways of measuring participants’ performance: (1) quitting, where participants were allowed to quit the task at any time during the 8-minute limit by telling the researcher present that they wanted to stop, and (2) puzzle performance, where marks were given at the end of the puzzle by the source website (even if participants quit halfway). The marks were calculated from the number of pieces fixed correctly divided by the total number of unfixed pieces, multiplied by the amount of time left in the puzzle. The maximum score was 5,000 and the minimum score was zero. All calculations were made automatically by the source website.&#13;
Puzzles from Livewire Puzzles have also been used by other studies measuring grit with an active task. Kalia et al. (2019), similar to this study, used puzzles from Livewire Puzzles as an active task to measure perseverance in participants; instead of a jigsaw puzzle, Kalia opted for sudoku to examine the role of grit and cognitive flexibility.&#13;
Procedure&#13;
The research took place in one-on-one sessions at the Lancaster University library. Data collection sessions were administered in the following order: demographic information, the first puzzle task, the difficulty-level question, the subjective inner speech question, the second puzzle task, the second difficulty-level question, and finally the second subjective inner speech question. Each participant undertook the puzzle task in both the control (baseline) and experimental conditions (with articulatory suppression). The sequence of puzzle tasks was decided by the participant’s subject ID, assigned by the researcher: participants with odd subject IDs completed the control puzzle task first, while participants with even subject IDs completed the experimental puzzle task first. Before starting the experimental puzzle task, the researcher spent a few minutes helping participants practise the articulatory suppression by saying the word ‘aluminium’ repeatedly at 90 bpm using an online metronome. Throughout the experimental task, if a participant mispronounced the word too obviously or consistently missed or skipped a beat, the researcher corrected their pronunciation or helped them return to the 90 bpm rhythm.&#13;
&#13;
During data collection, the researcher offered participants a break between puzzles if they began to tire, to prevent rushed answers. Participants were also allowed to ask questions while completing the questionnaire to clarify their understanding of the items. At the end of each session, the researcher thanked participants and answered any questions they had, explaining that they would be emailed a debrief sheet and could request a summary of the study’s findings once the data analyses were complete. Participants eligible for reimbursement of travel expenses were asked to fill out a participant payment form as confirmation of payment.&#13;
&#13;
Three different models of analysis were carried out in the study. For the first prediction, a linear model was fitted with the positive inner speech and evaluative inner speech scores from the VISQ-R as predictors and the grit score from the Short Grit Scale as the outcome. For the second prediction, a linear model was fitted with the participant’s decision to quit or not as the outcome and the interaction between experimental condition and grit as the predictor. For the third prediction, a generalised linear mixed-effects model was fitted, entering the interaction between experimental condition and the dimensions of inner speech (evaluative and positive) coded from the participant’s retrospective experience as predictors, and the participant’s decision to quit the task as the outcome. This model also included random effects: condition (baseline vs with articulatory suppression) as a random slope and participant as a random intercept.&#13;
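A minimal R sketch of these three models, assuming the lme4 package and hypothetical column and data frame names (survey, trials, grit, positive, evaluative, quit, condition, participant, positive_is, evaluative_is):&#13;
library(lme4)&#13;
&#13;
# (1) Do positive and evaluative inner speech scores predict grit?&#13;
m1 = lm(grit ~ positive + evaluative, data = survey)&#13;
&#13;
# (2) Linear model of quitting from the grit-by-condition interaction,&#13;
#     as described above (quit coded 0/1).&#13;
m2 = lm(quit ~ grit * condition, data = trials)&#13;
&#13;
# (3) Logistic GLMM of quit decisions from coded inner speech, with a&#13;
#     random slope for condition and a random intercept per participant.&#13;
m3 = glmer(quit ~ condition * (positive_is + evaluative_is) +&#13;
             (condition | participant),&#13;
           family = binomial, data = trials)</text>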
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3638">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3639">
                <text>The data format is csv.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3640">
                <text>Adam2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3641">
                <text>Huzaifah Adam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3642">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3643">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3644">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3645">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3646">
                <text>Dr. Bo Yao</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3647">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3648">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3649">
                <text>56 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3650">
                <text>Linear Model, Qualitative</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="184" public="1" featured="0">
    <fileContainer>
      <file fileId="206">
        <src>https://www.johnntowse.com/LUSTRE/files/original/59b8e43067d35e93f5ee81d15c7a4b64.doc</src>
        <authentication>dd3a76eadafef3ed40d8695df9cd80d9</authentication>
      </file>
      <file fileId="207">
        <src>https://www.johnntowse.com/LUSTRE/files/original/c4922da9b1039eb0f71b063458d30d9a.doc</src>
        <authentication>d3b28f1f9a54f497a67f37cd73e2b66c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="9">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="499">
                  <text>Behavioural observations</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="500">
                  <text>Project focusing on observation of behaviours.&#13;
Includes infant habituation studies</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3672">
                <text>Third Parties and Police Use of Lethal Force: Evidence from the Mapping Police Violence Database </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3673">
                <text>Sian Reid</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3674">
                <text>6th September 2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3675">
<text>Over recent years, media coverage has highlighted the use of excessive force by some police officers. The use of lethal force against Black and other ethnic minority citizens has been identified as a cause for significant concern. Research in the bystander literature and in non-fatal-force policing contexts has found that third parties can have positive impacts in reducing the severity of such incidents. The role of third parties in fatal force events, however, has not been investigated, and this is the gap the current study seeks to address. The Mapping Police Violence database was used to identify a year’s worth of lethal force events in the US. Newspaper articles relating to these incidents were coded in line with a predefined coding framework to examine the presence of third parties, and the nature of any social relationships with third parties, in relation to the type of lethal force used. The results revealed that third parties were present in just under half of incidents, and that the presence of a third party with a pre-existing social relationship to the citizen was associated with a lower likelihood of officers’ use of ‘less lethal’ forms of force resulting in a citizen fatality. These findings highlight the potential importance of third parties in understanding the nature of lethal police-citizen interactions, and the potential protective role the presence of known others may have in reducing the likelihood of officers using forms of less lethal force to excess.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3676">
                <text>Lethal force, Third Parties, Police Citizen Interactions, Use of Force</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3677">
<text>A secondary data analysis was used to examine the presence of third parties in incidents of police use of lethal force. The Mapping Police Violence database (Mapping Police Violence, 2020) was the primary dataset for the study. This freely available, open public database, compiled by researchers in the US, aims to provide a record of all police-involved deaths in the country; it has been recording such deaths since 2013, gathering information primarily from news articles published by various American news outlets. The type of force engaged in by officers that resulted in death was used as the outcome variable. The predictor variables were: the presence of third parties; the presence of known or unknown third parties; the number of officers present; the presence of other emergency services; the location of the incident; the race of the citizen; the gender of the citizen; the alleged presence of a weapon; the initial reason for the encounter; the presence of any digital technology capturing the event; and the level of threat posed to the officer.&#13;
The Mapping Police Violence database records multiple variables in relation to these incidents, including individual and situational factors. Several of the predictor variables in the current study were gathered from this dataset: the type of lethal force used, the alleged presence of a weapon, the race of the citizen, the gender of the citizen, the level of threat posed to the officer, the initial encounter reason, and the presence of a body-worn camera. Most of these variables were used as recorded in the dataset; however, the level of threat posed to the officer was recategorized. The multiple levels of threat recorded in the dataset were regrouped into three categories: attack (indicating the greatest level of threat to the officer), other (any other level of threat), and none (incidents in which it was clear there was no threat to the officer). The original data record only the presence of a body-worn camera. For the current study, this variable was broadened to the presence of any digital technology capturing the event, such as CCTV or smartphones, as research has found that any digital technology, and not only a body camera, can affect police-citizen interactions (Shane et al., 2017).&#13;
The Mapping Police Violence database records the citizen’s cause of death in relation to the type of force used. In incidents where multiple types of force contributed to the citizen’s death, the database records a list of all the types of force involved. The types of force in the database are gun, taser, pepper spray, baton and physical restraint. For the current study, these were grouped to give an outcome variable with fewer levels. The grouping follows previous research on police use of force, which identified a gun as a distinct type of force due to the increased risk of lethal outcomes; the other types of force were grouped into a second category of ‘less lethal’ force, as they have been identified as alternatives to the use of a gun that would be expected to reduce the likelihood of a citizen fatality (Sheppard &amp; Welsh, 2022). In incidents where multiple types of force were used, the most severe form was recorded; for example, if the cause of death is attributed to a gun and a taser, the incident is recorded with a gun as the type of lethal force used.&#13;
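As an illustration of this grouping rule, the sketch below collapses a recorded list of force types into the binary outcome, treating a gun as the most severe form. The file and column names are assumptions for illustration; the database’s own field names may differ.&#13;
&#13;
import pandas as pd&#13;
&#13;
def group_force(force_list):&#13;
    # Collapse a comma-separated list of force types into the binary&#13;
    # outcome: 'gun' if a gun contributed to the death, otherwise&#13;
    # 'less lethal' (taser, pepper spray, baton, physical restraint).&#13;
    types = [t.strip().lower() for t in force_list.split(",")]&#13;
    return "gun" if "gun" in types else "less lethal"&#13;
&#13;
df = pd.read_csv("mpv_extract.csv")  # hypothetical extract of the database&#13;
df["force_outcome"] = df["cause_of_death"].apply(group_force)&#13;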
The dataset contains links to the news articles used to gather information about each police-involved death. The variables in the current study relating to the presence of others were gathered by coding the news articles linked in the database to the individual incidents between 6th March 2022 and 6th March 2023, providing a sample of 1,257 police-involved deaths. News articles have recognised limitations as a source of information, particularly potential media bias in the reporting of crime-related stories (Lawrence, 2000). Research on newspaper reporting of police use-of-force incidents, however, has found that for many factors there is consistency between news reports and police reports of the same incidents (Ready et al., 2008). For the current study, news articles were used because they allow the events of police-involved deaths to be examined in relation to the presence of third parties.&#13;
To identify the relevant incidents for the current study, three primary exclusion criteria were applied before the news articles were coded. Firstly, to ensure there was sufficient information for the presence of third parties to be examined, at least one of the associated news articles was required to contain a minimum of 150 words. Secondly, as the study’s primary interest was the use of lethal force by on-duty officers, only incidents involving on-duty officers were included. Finally, incidents in which the officer’s use of force was accidental, such as car crashes involving police officers, were excluded, as these events have different characteristics from those in which officers intentionally use force against a citizen. Applying these exclusion criteria left a sample of 1,052 incidents of police use of lethal force.&#13;
To investigate the presence of others in these incidents, a predefined behavioural coding scheme (Philpot et al., 2019) was created before the analysis and applied to the news articles to capture the presence of third parties. This coding scheme contained 12 individual items capturing the presence of third parties and any social ties between third parties and the citizen involved in the incident (see Appendix A for the full coding scheme). Two additional items captured the presence of multiple officers or of other emergency services, and one further code captured whether the incident occurred in a public, semi-public or private location. Each item was coded 1 for presence, 0 for absence, or 99 where it was not clear whether the item was present. In total, 15 codes were included in the behavioural coding scheme. Some examples of the codes relating to the presence of third parties are:&#13;
“The presence of a third-party with a pre-existing social connection to the primary citizen involved”&#13;
“The presence of more than one officer”&#13;
“The presence of a third-party with no pre-existing social connection to the primary citizen involved”&#13;
To facilitate the coding of the news articles in line with the coding scheme, a Qualtrics survey (https://www.qualtrics.com) was created. The survey presented the individual items of the coding framework in a questionnaire format, allowing the items to be coded as closed-ended responses to questions about the presence of third parties. The responses from the survey were then transferred to an Excel document so the data could be prepared for analysis.&#13;
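One step in that preparation, sketched below under assumed file and column names, is converting the 99 ‘unclear’ codes into missing values so that they do not distort the later contingency tables.&#13;
&#13;
import numpy as np&#13;
import pandas as pd&#13;
&#13;
coded = pd.read_excel("coded_articles.xlsx")  # hypothetical export of the survey responses&#13;
item_cols = [c for c in coded.columns if c.startswith("item_")]&#13;
# 1 = present, 0 = absent, 99 = unclear; treat 99 as missing.&#13;
coded[item_cols] = coded[item_cols].replace(99, np.nan)&#13;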
Ethical approval was obtained for this study; it was reviewed and approved by a member of the Lancaster University Psychology Department, the supervisors’ ethics partner.&#13;
The reliability of the coding scheme and its application to the news articles was assessed by having a second researcher double-code 10% of the sample, separately from the primary researcher. To assess the level of agreement between the two researchers for each variable, Gwet’s AC1 coefficient (Gwet, 2014) was calculated. In line with the recommendations of Landis and Koch (1977), the resulting coefficients were interpreted as follows: a value of 0.4 or above indicating moderate agreement, 0.6 or above indicating substantial agreement, and 0.8 or above indicating almost perfect agreement between the raters’ scores. For 13 of the variables an agreement level of substantial or almost perfect was reached, as shown in Table 1 (Appendix B). For the variable indicating that the third party was a friend of the citizen, there was no variation in responses (i.e., 100% agreement), so a coefficient could not be calculated. For the location variable only a moderate level of agreement was found, and as a result this variable was excluded from the analysis.&#13;
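For two raters and binary codes, Gwet’s AC1 reduces to a simple closed form, illustrated by the sketch below; the example rater vectors are invented, and unclear (99) codes are assumed to have been removed already.&#13;
&#13;
import numpy as np&#13;
&#13;
def gwet_ac1(r1, r2):&#13;
    # Gwet's AC1 for two raters with binary (0/1) codes. pa is the&#13;
    # observed agreement; pe is chance agreement based on the pooled&#13;
    # prevalence pi, with pe = 2 * pi * (1 - pi).&#13;
    r1, r2 = np.asarray(r1), np.asarray(r2)&#13;
    pa = np.mean(r1 == r2)&#13;
    pi = (np.mean(r1) + np.mean(r2)) / 2&#13;
    pe = 2 * pi * (1 - pi)&#13;
    return (pa - pe) / (1 - pe)&#13;
&#13;
# Example: two raters coding ten articles for third-party presence.&#13;
print(gwet_ac1([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))&#13;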
Figure 1 depicts a flowchart of the process undertaken to sample the relevant incidents. The first part of the flowchart shows the initial process of identifying all police-involved deaths recorded in the Mapping Police Violence database over the preceding 12 months. Following the initial data collection, descriptive statistics highlighted that in the initial sample of 1,052 incidents there was very limited variation in the outcome variable: 990 incidents involved a gun as the primary cause of death and only 62 involved other forms of force. In this initial sample, a cause of death not involving a gun would statistically be considered a rare event, which would have made it difficult to use this variable as the outcome in any subsequent analyses. In line with published recommendations (Shaer et al., 2019), an oversampling approach was used to overcome the limitations of a rare event in the outcome variable: further incidents in the dataset that did not involve a gun as the cause of death were oversampled so that at least 10% of the sample involved a cause of death other than a gun. As can be seen in Figure 1, to keep these incidents as similar to the primary sample as possible, they were sampled only from the three preceding years, limiting any additional variation that a wider date range might have introduced. This identified a further 182 incidents in which the citizen’s cause of death did not involve a gun. The same exclusion criteria were then applied, excluding a further 65 incidents and leaving 117 additional incidents, which were coded following the same procedure as the initial sample. This oversampling procedure produced a final sample of 1,169 incidents.&#13;
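A minimal sketch of that oversampling step is given below, reusing the hypothetical force_outcome labels from the earlier snippet: all eligible non-gun incidents from the three preceding years are appended so the rare category clears the 10% threshold.&#13;
&#13;
import pandas as pd&#13;
&#13;
primary = pd.read_csv("primary_sample.csv")    # hypothetical: the 1,052 incidents&#13;
earlier = pd.read_csv("earlier_incidents.csv") # hypothetical: three preceding years&#13;
&#13;
# Oversample the rare outcome: add earlier non-gun incidents so that at&#13;
# least 10% of the final sample has a cause of death other than a gun.&#13;
extra = earlier[earlier["force_outcome"] != "gun"]&#13;
combined = pd.concat([primary, extra], ignore_index=True)&#13;
rare_share = (combined["force_outcome"] != "gun").mean()&#13;
assert rare_share >= 0.10, "rare outcome still under 10 percent"&#13;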
&#13;
The data analysis used chi-square tests of independence to examine whether the presence of others during fatal police-citizen interactions had a statistically significant relationship with the outcome variable of the type of lethal force used by officers. Given the exploratory nature of the study, no direction or form was predicted for the relationship between the third-party predictor variables and the type of fatal force used (McIntosh, 2017). Prior to the main analyses, descriptive statistics were run to investigate the distributions within variables and to identify any rare-event variables.&#13;
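Each of these tests amounts to a contingency-table test between one coded predictor and the binary force outcome; a minimal sketch with scipy, using the assumed column names from the earlier snippets, is:&#13;
&#13;
import pandas as pd&#13;
from scipy.stats import chi2_contingency&#13;
&#13;
df = pd.read_csv("final_sample.csv")  # hypothetical: the 1,169 coded incidents&#13;
&#13;
# Chi-square test of independence between third-party presence and the&#13;
# type of lethal force used.&#13;
table = pd.crosstab(df["third_party_present"], df["force_outcome"])&#13;
chi2, p, dof, expected = chi2_contingency(table)&#13;
print(table)&#13;
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")&#13;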
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3678">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3679">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3680">
                <text>Reid2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3681">
                <text>John Oyewole&#13;
Michelle Kan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3682">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3683">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3684">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3685">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3686">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3727">
                <text>Dr Mark Levine&#13;
Dr Richard Philpot</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3728">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3729">
                <text>Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3730">
                <text>1169 incidents</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3731">
                <text>Pearson's Chi Square&#13;
Chi Square Goodness of Fit</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="185" public="1" featured="0">
    <fileContainer>
      <file fileId="204">
        <src>https://www.johnntowse.com/LUSTRE/files/original/b8f75dc1e1ab0f20a5a61b57fddeba52.doc</src>
        <authentication>4d757be9d7867a128bce4cbedd7dbab9</authentication>
      </file>
    </fileContainer>
    <collection collectionId="10">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="819">
                  <text>Interviews</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3687">
<text>Can We Reduce Childhood Obesity in the Community? A Qualitative Perspective that Discusses the Barriers and Strategies to Childhood Obesity within Miles Platting and Newton Heath.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3688">
                <text>Charlotte Graham</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3689">
                <text>2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3690">
<text>Childhood obesity (CO), which can have long-term negative health consequences, has increased dramatically over the last thirty years. Given this, NHS Manchester has commissioned this study, with a particular focus on the Manchester boroughs of Miles Platting and Newton Heath due to their high rates of CO, to explore the relevant dynamics, understand the barriers to healthier eating and lifestyles, and derive strategies to combat CO. Semi-structured interviews were conducted with healthcare professionals. Within these interviews, the healthcare professionals commented on their experiences of CO within their job roles and on what they believe the barriers to be for parents with respect to CO. Their views on parents’ experiences of CO were informed by working with parents and discussing these barriers with them. It was found that a child’s home life strongly affects the likelihood of childhood obesity, with parental education, motivation and poverty playing significant roles, along with parents’ lack of skills, knowledge, money and time. Based on these factors, strategies that have previously been successful or unsuccessful are discussed, along with ideas for future strategies. The findings suggest that collaboration between the different services offered within the Manchester area offers scope for improvement, while strategies to help reduce CO need to focus on a ‘show and tell’ approach whereby individuals receive immediate support, such as access to healthy food, while gaining the practical skills, such as cooking or budgeting, needed to create sustainable change. These strategies are discussed in relation to the general community and to specific goals for NHS Manchester, with the aim of increasing the likelihood of healthier lifestyles being adopted.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3691">
                <text>Childhood Obesity. Poverty. Education. Barriers. Strategies. Recommendations.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3692">
                <text>Sample &#13;
Nine people participated in the current study. Each participant was a healthcare professional over the age of 18; ‘healthcare professional’ was used broadly for anyone who dealt with CO in a professional capacity. The initial participants were recruited from the contacts of the NHS Manchester Local Care Organisation, with further participants recruited by snowball sampling from these initial contacts. The job roles in the sample included a business manager for a school, a school meal supervisor, a GP nurse, a bursary manager, and an array of service workers in different local community services, such as the Healthy Weight Team in Manchester. Each participant worked within Manchester, specifically in Miles Platting and Newton Heath.&#13;
&#13;
Design and Materials &#13;
Ethics&#13;
This qualitative study was reviewed and approved by the Faculty of Science and Technology Ethics Committee at Lancaster University (see Appendix A). All participants were provided with information about the study and were informed of their ethical rights, such as the right to withdraw, confidentiality, and data protection.&#13;
&#13;
Procedure&#13;
The initial participants were introduced to the researcher via email by a member of Manchester Local Care Communication. Once this introduction had taken place, the research and the arrangement of interviews were discussed between the researcher and the participant by email. After completing their interview, these participants introduced the researcher to their other contacts by email (the snowball sample). Email was the primary contact method for each participant and throughout the recruitment process.&#13;
Each interview was an online semi-structured interview lasting between 30 and 60 minutes. Microsoft Teams was used to facilitate the discussion, record it, and create a transcript. A limitation of the software is that both the audio and the video of a Teams meeting are recorded; participants were therefore informed of this before recording began and asked whether they would like to turn their cameras off. While the interview was ongoing, a discussion guide was followed, with prompts used to encourage further elaboration on answers.&#13;
&#13;
Footnote&#13;
The initial methodology planned to include interviews with parents of primary-school-aged children. However, logistics, timing and a lack of engagement made this impossible, so no parents were included in the sample. Instead, the interviews sought a parental perspective as seen from the viewpoint of the healthcare professionals. This viewpoint was informative because of the healthcare professionals’ interactions with parents, which provided insight into parents’ thoughts about CO. However, as it comes from a secondary source, its accuracy needs to be borne in mind.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3693">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3694">
                <text>Text/Word doc.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3695">
                <text>Graham2023</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3696">
                <text>Georgie Comerford&#13;
Katy Nichol</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3697">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3698">
                <text>None.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3699">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3700">
                <text>Interviews</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3701">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3702">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3703">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3704">
                <text>Marketing, Developmental, Social.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3705">
                <text>9</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3706">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
