<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=12" accessDate="2026-05-03T09:36:37+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>12</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="71" public="1" featured="0">
    <fileContainer>
      <file fileId="25">
        <src>https://www.johnntowse.com/LUSTRE/files/original/2d240b7ef45b825fd4cfdb477cc8aa00.pdf</src>
        <authentication>9b4db285519912b22505ae113ad6ad1b</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1675">
                <text>Contrast polarity of a stimulus does not affect the cueing effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1676">
                <text>Eleni Sevastopoulou</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1677">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1678">
                  <text>According to the contrast polarity effect, people’s attention is sensitive to dark objects within light backgrounds. According to the gaze-cueing effect, a human gaze shift attracts observers’ attention towards the direction of the darker region of the observed eyes; the gaze-cueing effect thus depends on the contrast polarity of the observed eyes, a human gaze being perceived as a darker spot within a lighter background. In the present study, combining the contrast polarity effect and the gaze-cueing effect, we examined whether the colour contrast between a black and a white square that suddenly flip on a computer screen can have an effect similar to that of gaze-cueing. The prediction was that participants would perceive the side to which the black square moved after the flipping as an attentional cue; therefore, when an object appeared on the side to which the black square moved, reaction times would be shorter compared to when the object appeared on the opposite side. The results showed that reaction times in the two conditions did not differ significantly. Thus, the contrast polarity of a stimulus does not affect the cueing effect.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1679">
                <text>Gaze cueing&#13;
Contrast polarity&#13;
Gaze perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1680">
                <text>The experiment used a single-factor, within-subjects design. The independent variable was cue congruency, which consisted of two conditions: the object appeared either congruently or incongruently with the attentional cue. The dependent variable was the participants’ reaction times, measured in milliseconds (ms).&#13;
Procedure. Each participant was tested individually in a quiet room at the library of Lancaster University. Participants were tested on different days and at different times, including morning and evening hours. The only people present in the room during the experiment were the participant and the experimenter.&#13;
At the beginning, participants were asked to read the experiment instructions from the computer screen and were given clarifications by the researcher if needed. Afterwards, the experiment started and two squares, one black and one white, sharing one side were presented on the screen for half a second. The shared side was located at the centre of the screen, so one square appeared on the left side of the screen and the other on the right. Then the squares flipped and changed positions; the apparent motion of the two squares was the cue. One second after flipping, the squares disappeared and a picture of an object randomly appeared either on the left or on the right side of the screen for one more second. Afterwards, the object disappeared and the screen remained blank.&#13;
The task of the participants was to press the appropriate keyboard button as fast and as accurately as possible, depending on the side of the screen where the object appeared. They had to press the «Q» button on the keyboard when the object appeared on the left side of the screen or the «P» button when it appeared on the right. They were given one second to respond to the object’s appearance. The sequence of the trials was the same for every participant. Each of the 6 objects appeared a total of 30 times congruently with the cue and 30 times incongruently with it. Thus, the total number of trials for every participant was 360: 180 trials in which the objects appeared congruently with the cue and 180 in which they appeared incongruently with it. The experiment lasted 20 minutes for each participant, and at the end of every session a message appeared on the screen informing the participants that the experiment was over.&#13;
The prediction was that the side to which the black square moved after the flipping would be perceived as an attentional cue by the participants. Their gaze would be attracted to the cue and an effect similar to the gaze-cueing effect would appear. Their reaction times would therefore be shorter for trials where the objects appeared on the same side as the attentional cue compared to trials where the objects appeared on the opposite side of the cue. The independent variable was cue congruency, which included two conditions: congruent trials (the object appeared on the same side as the cue) and incongruent trials (the object appeared on the opposite side of the cue).&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1681">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1682">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1683">
                <text>Sevastopoulou2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1684">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1685">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1686">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1687">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1688">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1689">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1690">
                <text>Dr. Eugenio Parise</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1691">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1692">
                <text>Cognitive Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1693">
                <text>25 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1694">
                <text>t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="70" public="1" featured="0">
    <fileContainer>
      <file fileId="24">
        <src>https://www.johnntowse.com/LUSTRE/files/original/59a7edf93d70608679c4404a6a2cf427.pdf</src>
        <authentication>19db76515b8d3de5a0a79a15c9b3551a</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1658">
                <text>Visual engagement with different animals</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1659">
                <text>Rebecca Gregson</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1660">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1661">
                <text>People treat animals differently depending on how they are dichotomized. The present study tested the consequences of dichotomization on our visual engagement with still images of different animals. Fifty-seven participants took part in two identical image visualization tasks; the first preceded a short empathy-inducing video, and the second followed it. We used eye-tracking to study dwell time percentage oriented towards the eyes of companion, farmed and endangered animals. Eye-directed visual engagement was greatest for companion animals in the first image visualization task. This bias in visual engagement towards companion animals was attenuated in the second image visualization task. We hypothesised that the empathy-inducing video would change gaze towards farmed animals, evidencing either increased attentional avoidance or increased engagement. Although mean averages suggest a slight increase in visual engagement following the video, this difference was not significant. Participants reported the highest levels of negative emotion regarding the farmed animal videos. Empathic gaze with farmed animals correlated positively with participants’ level of meat consumption restriction. The findings support several pre-registered hypotheses but disconfirm others, and are discussed in terms of the extension of empathic gaze to animals.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1662">
                <text>Animals, dichotomization, eye-tracking, empathic gaze, guilt</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1663">
                <text>Participants&#13;
Our pre-registered recruitment strategy was to collect fifty participants with complete data. Fifty participants were recruited through (1) Lancaster University’s research participation system, SONA, or (2) poster advertisement, and were paid £3 for their involvement. Each participant saw 9 images, presented twice, each for 10 seconds, totaling 180 seconds of eye-tracking data. On first inspection of the data we were forced to exclude seven participants whose eyes had not been tracked for 50% of the experiment. To reach our pre-registered participant pool of 50 we recruited seven more participants, one of whom had to be excluded on the same grounds as before. Our final data set comprised 49 participants, 36 females and 13 males. Age ranged between 18 and 30 (M = 21.10, SD = 2.13). Participants reported a range of nationalities, including: American (n = 1), British (n = 28), Bulgarian (n = 3), Chinese (n = 3), Croatian (n = 2), German (n = 2), Hungarian (n = 2), Indian (n = 3), Indonesian (n = 1), Latvian (n = 1), Nigerian (n = 1), Malaysian (n = 1) and Slovakian (n = 1). Participants’ dietary classifications were as follows: Meat lover (n = 1), Omnivore (n = 23), Semi-vegetarian (n = 16), Pescatarian (n = 3), Lacto- or Ovo-vegetarian (n = 5), Strict vegetarian (n = 0), Dietary vegan (n = 0), and Lifestyle vegan (n = 1).&#13;
Design&#13;
The experiment employed a 3x2 fully within-subjects design. The independent variables were animal category and time. The variable animal category had three levels: farmed animals (sheep, cow, pig), companion animals (dog, cat), and endangered wild animals (chimpanzee, tiger, koala), and was operationalized using still images. Our main research interest was the distinction between farmed and companion animals, given the marginalized status of farmed animals in society and the privileged status of companion animals. Endangered animals are vulnerable to human interference and confer some value due to their endangered status, but they are not actively used by humans as objects of consumption. For this reason, endangered animals were used as a control or comparison group. The variable time had two levels, pre- and post-video task. Participants took part in two IVT, one before a video watching task and one after. Our main dependent variable was dwell time percentage on the eyes of the animal. This was recorded during the presentation of each of the nine images in both IVT. At no other point in the experiment were eye-movements recorded.&#13;
Additional outcome measures. We recorded the participants’ emotional state immediately after the video watching task. Participants’ emotion ratings were transformed into numerical values as follows: Extremely positive (+3), Fairly positive (+2), Slightly positive (+1), Neutral (0), Slightly negative (-1), Fairly negative (-2) and Extremely negative (-3). As a result, more negative responses were represented by a more negative value. We asked participants whether (Yes/No) they contribute to the suffering and well-being of each animal category. Participants were also asked to state their agreement (Yes/No) with two statements, the first regarding their outrage having heard about the harm inflicted on animals, and the second about the animals’ capacity to suffer as being meaningfully similar to a human’s capacity to suffer. However, due to an experimenter error, these four measures were not recorded by the experiment-analysis system, and therefore cannot be discussed further.&#13;
Materials&#13;
Images. In total we sourced nine images, three for each animal category in our design. We sourced images of three different species of animal to make up each target category. The companion animal category was the only exception to this rule. For this category of animal, we used two dog images (Siberian Husky and Staffordshire Bull Terrier) and one cat image. In our original companion animal category, we had considered using the image of a horse, but decided against this for two reasons. Firstly, the composition of the face was noticeably different in comparison to the other eight images. The horse’s face was longer, with its eyes positioned laterally. Secondly, the category into which horses fall (i.e. farmed or companion) is often blurred. Whilst cows pose similar facial composite issues to the horse, there is no question that cows are members of the farmed animal category. We decided that this justified the inclusion of the cow in the experiment, but we could not justify the use of the horse. The original source for each image is displayed in Appendix A. Due to limited financial resources we were restricted to the use of free, open-source images. This meant that the images contain some background colour and contextual inconsistencies. Nonetheless, all images share these same consistencies: forward-facing gaze, minimal to no background noise and the absence of other animals. We adjusted some of the images so that the body of the animal is mostly cropped out. As a result, all nine images have a central focus on the animal’s face. We ensured that the images did not objectively indicate animal harm nor confinement. Finally, all animals were adult so as to avoid the baby schema effect, the finding that infantile features promote caregiving behaviour (Archer &amp; Monton, 2011; Borgi, Cogliati-Dezza, Brelsford, Meints &amp; Cirulli, 2014; Fridlund &amp; MacDonald, 1998). This was an important consideration as the baby schema effect has been linked to stronger caregiving motivations with animals (Piazza, McLatchie &amp; Olesen, 2018).&#13;
Videos. Three videos were selected to induce empathic concern with each of the three animal categories. Each video targeted a specific class of animal (companion, farmed, or endangered) and was presented prior to the second viewing session. All three videos outlined the harm inflicted upon the relevant animal category. They include emotional but not graphic content and were selected for their empathy-arousing nature. To reduce any variation caused by the different music styles of the videos, all audio was removed. Videos were trimmed to ensure that they had a similar duration. Supplementary details of each video can be found in Appendix B. Additionally, each video can be accessed in the “Materials” section of our OSF file.&#13;
Stimuli presentation. All stimuli were presented on a Windows 10 Pro HP laptop which had a 14-inch monitor, a refresh rate of 60 Hz and an Intel® Core™ i7-4710MQ CPU. Stimuli ran semi-automatically. The experiment was built using Experiment Centre (Version 3.6, SensoMotoric Instruments).&#13;
Eye-tracking device. Eye movements were recorded monocularly and at a frequency of 30 Hz using the REDn Scientific eye-tracking device (SensoMotoric Instruments). Gaze was calibrated using a 5-point method and a calibration area of 1920 x 1080. We used a centered black cross for the fixation points during the initial calibration and throughout the experiment. These were Arial in font and 72 in size. The experiment was built to measure dwell time percentage during the IVT only.&#13;
Diet. Diet was assessed using an adapted version of the 5-item dietary practice scale used by Piazza, Ruby, Loughnan et al. (2015). We expanded the original scale to include 8 dietary practices. These included “Meat lover,” “Omnivore,” “Semi-vegetarian,” “Pescatarian,” “Lacto- or Ovo-vegetarian,” “Strict vegetarian,” “Dietary vegan,” and “Lifestyle vegan”. Definitions for each category are provided in Appendix C.&#13;
Procedure&#13;
Preliminary procedures. Participants were tested individually. Having been welcomed into the lab, each participant received an information sheet and consent form. All participants who arrived at the lab gave their consent. Each participant was seated on a stationary chair at a desk where the equipment stood. The experimenter explained that they would load up the experiment and leave the participant to complete it in privacy. The experiment ran an initial calibration of the eye before moving on to the task information. Task information was presented across three separate screens which outlined for the participant what would be required of them (see Appendix D).&#13;
Warm-up. Participants took part in two identical IVT. The first was framed as a warm-up. These warm-up trials ran automatically and did not require any participant action. Following task information, participants saw a screen which read “Warm-up” for 4000ms. The animal category was then announced (e.g. “Farmed Animals”, “Companion Animals” or “Endangered Animals”) and remained on screen for 4000ms. A centered fixation point appeared for 500ms before the first category animal image appeared for 10,000ms. It was during each 10,000ms image presentation that eye-movements were recorded. This same fixation point/image presentation routine was repeated three times over to cover all three images in each category. The order in which each animal category was presented was randomized across participants. Having completed the IVT for each animal category, participants were presented with a screen instructing them that the warm-up was now complete. This instruction screen was advanced manually by the participant.&#13;
Video watching task. Following the first IVT, participants took part in the video watching task. The animal category was first announced and remained on screen for 4000ms. The appropriate video then played and was concluded with a blank screen lasting 3000ms. Participants were then made aware that the video had finished. Having manually moved the experiment along, the participant was next asked to indicate their current emotional state. They read: “How positive or negative do you feel right now?” and selected their response via mouse-click on a 7-point scale with the following range: “Really negative,” “Fairly negative,” “Slightly negative,” “Neutral,” “Slightly positive,” “Fairly positive” and “Really positive”. Again, this screen was manually advanced. The participant was next presented with the statement “I contribute to the suffering of Farmed/Companion/Endangered animals” and was asked to indicate their response using the “Y” (Yes) and “N” (No) keys on the keyboard before pressing the space bar to advance. “I contribute to the well-being of Farmed/Companion/Endangered animals” was presented on the next screen and participants indicated their response as before. Responses to these Y/N questions failed to record due to a programming error, and therefore will not be discussed further.&#13;
The second IVT. As in the first IVT, participants saw a centered fixation point (500ms) followed by the first category animal image (10,000ms). Again, the REDn was programmed to record eye-movements during each of the 10,000ms image presentations. After each animal image the participant was presented with the statement: “Thinking about how ___ (e.g. Cows) are slaughtered for their meat makes me feel outraged” and was again asked to indicate their response using the “Y” (Yes) and “N” (No) keys on the keyboard. This question was tailored to each animal category and target animal (see Appendix E for a list of the statements used). Next the participant read: “___ (e.g., Cows) possess a capacity to suffer that is meaningfully similar to humans” and was asked to indicate their response Y/N as before. This procedure was repeated three times over, once for each animal target. Due to a programming error, responses to these Y/N questions were not recorded, and therefore they will not be discussed further. The entire procedure from the beginning of the video watching task to the end of the second IVT was repeated for each animal category, the order of which was randomized for each participant. See Appendix F for a visual representation of the experiment flow.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1664">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1665">
                <text>SPSS data </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1666">
                <text>Gregson2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1667">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1668">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1669">
                <text>SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1695">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1670">
                <text>Dr. Jared Piazza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1671">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1672">
                <text>Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1673">
                <text>49 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1674">
                <text>ANOVA, correlation, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="69" public="1" featured="0">
    <fileContainer>
      <file fileId="23">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4517b206e143941069f6f7a9faebec5a.pdf</src>
        <authentication>66b07a82533d2587067e7f9f510521af</authentication>
      </file>
    </fileContainer>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1638">
                <text>How does metaphorical language affect individuals’ aesthetic perception in modern poetry: In the life span view</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1639">
                <text>Qishan Liao</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1640">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1641">
                <text>This study examined the relationship between the degree of metaphoricity and beauty perception, and between cognitive load and beauty perception, while controlling for potentially confounding variables such as familiarity and imageability. Although previous research has shown that metaphoricity, familiarity and imageability influence beauty perception, no study had investigated how the degree of metaphoricity and cognitive load influence beauty perception in the reading of poetic sentences; this study aimed to bridge that gap. A beauty rating scale and a keypress experiment were administered to 22 young adults and 18 elderly adults. Because of the collinearity among metaphoricity, familiarity and imageability, a new variable, interpretability of metaphors, was used to frame the hypotheses. Interpretability, rather than cognitive load, predicted beauty perception in the reading of poetic sentences. Young adults’ beauty ratings peaked for novel metaphors, whereas elderly adults rated dead metaphors as the most beautiful stimuli. This study suggests that poetic sentences are generally perceived as more beautiful when their interpretability is lower, in young adults but not in elderly adults. These findings provide an initial basis for future longitudinal or neuroaesthetic studies to further the understanding of the relationship between metaphorical language and beauty perception.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1647">
                <text>Beauty perception&#13;
Metaphoricity&#13;
Familiarity&#13;
Imageability</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1648">
                <text>This study was approved by the Psychology ethics committee at Lancaster University on 24/04/2018 and was preregistered on the ‘AsPredicted’ website (registration no. 11034).&#13;
Participants.&#13;
The participants were 22 young adults aged between 18 and 30 and 20 elderly adults aged between 55 and 75, recruited through the SONA system and social media (e.g., a Facebook advert). All young participants were native English speakers with no history of a learning disability (e.g., dyslexia). Two elderly participants reported a history of dyslexia and were excluded. The final sample therefore comprised 22 young adults (M = 21.64 years, SD = 3.05) and 18 elderly adults (M = 63.22 years, SD = 6.07). Participants gave informed consent via an online form before completing the online survey and signed a paper consent form before the keypress experiment. Each participant received four pounds after completing both parts.&#13;
Materials.&#13;
Stimuli. A bank of 92 stimuli was generated by a previous student supervised by Dr Francesca Citron. Some sentences were excerpted from modern poetry; the remainder were created by that student, inspired by other poetic works. Novel sentences were created to reduce the bias that could arise from participants being familiar with some stimuli. All stimuli were divided into five categories of increasing metaphoricity. The first category, literal expressions, has a concrete, pragmatic meaning that usually equals its literal reading; it is not part of metaphorical language. The second is dead metaphors, metaphors that have lost their imaginative space through frequent use (Punter, 2007). The third is conventional metaphors, which are commonly used in everyday life and closely tied to a specific culture. The fourth is novel metaphors, which are unusual in everyday life and challenging for the layperson to understand. The last category is extremely novel metaphors, which are the most abstract and challenging; the semantic overlap between subject and predicate in these sentences is less obvious than in the other categories. Considering the potential fatigue of the elderly participants, the researcher randomly selected 50 stimuli from the original bank as experimental materials (see Appendix A), ten sentences per category. All stimuli were given a specific code for identification during analysis. The creator of the stimulus bank had invited 85 participants to rate the degree of metaphoricity of each stimulus on a 7-point Likert scale (1 = minimum, 7 = maximum); the ratings confirmed that metaphoricity increased across categories as designed (see Figure 1).&#13;
 &#13;
Figure 1. Scatterplot showing the trend of metaphoricity ratings of the stimuli. The categories correspond to stimulus numbers as follows: literal sentences (1-14), dead metaphors (15-28), conventional metaphors (29-53), novel metaphors (54-75), and extremely novel metaphors (76-92).&#13;
Apart from metaphoricity, the stimuli had been rated on multiple sentence-level characteristics, including familiarity and imageability, by the same group of participants. Briefly, ratings were collected by asking participants "how familiar is this sentence to you?" and "how easy is it to imagine this sentence?" on two separate 7-point Likert scales. These raw data were used for analysis in this study.&#13;
Survey. The beauty rating scale was designed as a 7-point Likert scale in the online survey software ‘Qualtrics’. The survey included a digital version of the information sheet, consent form and debrief form, and collected basic demographic information such as age, biological sex and reading frequency (Appendix B). It also included questions checking whether participants were British native speakers and whether they had a history of a learning disability (e.g., dyslexia), since these factors can influence beauty ratings. In the main test, the 50 stimuli were presented in random order through Qualtrics. Participants saw each poetic sentence together with the question ‘How beautiful is this sentence to you?’ and rated it from 1 to 7 (1 = not at all beautiful, 7 = extremely beautiful).&#13;
Experiment. The researcher created a keypress experiment in the ‘Presentation’ software (Neurobehavioral Systems). The materials were identical to the online survey, with four additional filler sentences, five odd sentences, and four questions related to the poetic stimuli. These new items were generated by the researcher and were not analysed, given their function (Appendix A): filler sentences let participants practise responding via keypress; odd sentences were nonsensical and served to discourage mechanically repeated responses; similarly, some poetic stimuli were followed by a question checking whether participants were answering seriously. To randomise the order of the experimental materials, six versions of the experiment were created. Participants read each sentence one at a time and judged whether it was sensible by pressing a key on the keyboard (“F” for “Yes” and “J” for “No”). To avoid habitual responses from participants familiar with traditional keypress experiments, six corresponding flipped versions were also created, giving 12 versions in total, randomly allocated across participants. Participants completed the experiment on the researcher’s computer, and the response and reaction time for each sentence were collected by Presentation automatically and anonymously.&#13;
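The versioning scheme described above (six stimulus orders, each with a normal and a flipped key mapping) can be sketched as follows. This is a hypothetical illustration, not the authors' Presentation scripts; the function and stimulus names are invented for the example.

```python
import random

def build_versions(stimuli, n_orders=6, seed=0):
    """Build 12 experiment versions: 6 shuffled stimulus orders,
    each paired with a normal and a flipped Yes/No key mapping."""
    rng = random.Random(seed)
    versions = []
    for _ in range(n_orders):
        order = stimuli[:]
        rng.shuffle(order)
        # normal mapping (F=Yes, J=No) and its flipped counterpart
        for keys in ({"F": "Yes", "J": "No"}, {"F": "No", "J": "Yes"}):
            versions.append({"order": order[:], "keys": keys})
    return versions

# 50 placeholder stimulus codes standing in for the poetic sentences
versions = build_versions([f"S{i}" for i in range(1, 51)])
# Each participant would then be randomly allocated one of the 12 versions.
```

Pairing every order with both key mappings keeps item order and response mapping fully crossed, so any keypress habit averages out across participants.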
&#13;
&#13;
Procedure. &#13;
      Questionnaire. When a participant decided to take part, the researcher e-mailed them an anonymous questionnaire link. The questionnaire could be completed on any electronic device, and participants could pause it whenever they needed a break. After clicking the link, participants read the information sheet and the electronic consent form in turn, to ensure they understood the necessary information and gave their consent. They then answered check questions establishing whether they were native speakers and whether they had a previous or current learning disability, to confirm their suitability for the questionnaire. Subsequently, demographic information was collected, and all answers were kept confidential.&#13;
Next, a brief instruction page explained the basic operation of the questionnaire and some key terms (e.g., beauty). The 50 poetic stimuli, varying in degree of metaphoricity, were then presented in random order, each followed by the question ‘How beautiful is this sentence to you?’. Participants responded on the 7-point Likert scale (1 = not at all beautiful, 7 = extremely beautiful), and all answers were recorded automatically by Qualtrics.&#13;
Finally, participants read the debrief sheet explaining the purpose and design of the questionnaire, together with the relevant references and the experimenter’s contact information. Once a participant had completed the questionnaire, the experimenter e-mailed them to arrange the keypress experiment, usually one or two days later.&#13;
Keypress experiment. All participants met the experimenter in person to complete the keypress experiment. Before it began, they signed the paper version of the consent form, and the experimenter verbally explained the procedure. The experimenter then randomly selected one of the twelve versions of the experiment and gave the participant a unique code. Participants were asked to judge whether each sentence presented on the screen was sensible, pressing the key representing ‘Yes’ if it was and the key representing ‘No’ otherwise; the Yes/No questions were answered in the same way. Once they understood the procedure, participants pressed the F or J key to start the experiment. Before each poetic sentence, a white fixation cross appeared at the centre of the black screen for 1000 ms. The stimulus then appeared for 8700 ms, during which participants usually gave their response; reaction times were recorded automatically by the software. Each stimulus was followed by a 300 ms blank screen with a white jittered fixation cross before the next sentence or question was presented. If a participant responded during this blank period, the reaction time for that stimulus was the time elapsed in this period plus 8700 ms. All stimuli were presented in white, font size 12, on a black background.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1649">
                <text>Lauren McCann</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1650">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1651">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1652">
                <text>data/excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1653">
                <text>Liao2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1654">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1655">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1656">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1657">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1642">
                <text>Francesca Citron</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1643">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1644">
                <text>Beauty Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1645">
                <text>22 young adults and 20 elderly adults</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1646">
                <text>Independent T-test&#13;
Pearson's correlation&#13;
Partial correlation&#13;
Hierarchical regression&#13;
Simple regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="68" public="1" featured="0">
    <fileContainer>
      <file fileId="21">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6e55fa69336c955afd8161d2c2f4951f.doc</src>
        <authentication>4f750621696649cd87b16387c2a59e72</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1620">
                <text>Neural response to infant-directed speech: gamma band oscillatory activity in 4-month-old infants </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1621">
                <text>Marina Ciampolini</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1622">
                <text>2019</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1623">
                <text>Infant-directed speech is an ostensive signal that infants prefer over adult-directed speech. We studied infants’ neural response to auditory stimuli by measuring gamma-band oscillatory activity over the frontal area of the brain in response to ostensive infant-directed speech and non-ostensive adult-directed speech. Two groups of 4-month-old infants were presented with the same auditory stimuli but different visual stimuli (inverted vs. upright faces), as our study was part of a broader research project; we investigated only the auditory portion of the trial. We found that in the inverted-face group the activation to ostensive infant-directed speech was significantly enhanced, whereas in the upright-face group this outcome was not found. These findings support the use of gamma-band oscillations in assessing the basis of social communication and establish infants’ early specialization in understanding communicative signals directed to them.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1624">
                <text>Infant-directed speech; neural response; EEG; gamma oscillation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1625">
                <text>Experimental Design &#13;
We used data that had already been collected for a broader study designed to observe the influence of auditory stimuli on face processing. In the main experiment, a total of 36 four-month-old infants were divided into two groups that differed in the visual stimulus presented at the end of each trial: immediately after an auditory stimulus in IDS or ADS, the first group was presented with inverted faces, while the second group was exposed to upright faces. Participants in both groups were thus exposed to the same auditory stimuli just before the visual stimuli, which differed by group. In this research we focused only on the auditory portion of the trial, in which participants were exposed to IDS or ADS (Fig. 1).  &#13;
 &#13;
 &#13;
Figure 1. Representation of the complete trial presented to infants. Eighteen infants were presented with upright faces, while the other 18 were presented with inverted faces; each infant was exposed to auditory stimuli in ADS or IDS regardless of the visual stimulus. The green rectangle shows the portion of the trial analysed in this dissertation. &#13;
Participants &#13;
Infants were recruited from the Lancaster Babylab database. All were free of any known neurological, ocular or auditory abnormality and met the screening criteria of normal birth, born full term (gestational age &gt;37 weeks), in the normal weight range (&gt;2500g) and with an Apgar score of at least 8 at five minutes after the birth.  &#13;
In our study we focused on infants’ neural response to the auditory stimuli (Fig. 1); however, the distinction between the two groups was preserved in order to observe possible differences between them. The group exposed to inverted faces included 18 infants (5 females, age range 117 to 161 days, M = 135.61 days); thirteen additional infants were excluded owing to an insufficient number of artifact-free segments (n=10), sleep (n=1), or technical issues during the experiment (n=2). The group presented with upright faces included 18 infants (5 females, age range 115 to 171 days, M = 145.22 days); seventeen additional infants were excluded because of an insufficient number of artifact-free segments (n=14) or technical issues during the experiment (n=3). The final datasets (N=36) included infants who provided artifact-free EEG recordings in at least 10 trials within each experimental condition.   &#13;
Stimuli &#13;
The auditory stimulus was the word “Hello” pronounced by a female voice with one of two intonations: IDS or ADS. The two words were recorded and edited with Audacity (v. 1.2.5) and Praat (v. 5.1), digitized at 32-bit resolution with a sampling rate of 48 kHz. Both words were 850 ms long. The IDS stimulus had an average volume intensity of 61.86 dB, while the ADS stimulus had an average volume intensity of 61.50 dB.  &#13;
Apparatus &#13;
Infants’ behaviour was video recorded for the entire duration of the test by a remote-control video camera placed behind the monitor. A pair of computer speakers situated behind the monitor was used to present the auditory stimuli. The infants’ EEG was recorded at a sampling rate of 500 Hz using a 124-channel Hydrocel Geodesic Sensor Net (Electrical Geodesic Inc., Eugene, OR, USA). &#13;
Procedure &#13;
Infants sat on their parent’s lap at a distance of 70 cm from a computer monitor. Each trial started with a dynamic fixation grabber at the centre of the monitor for 2150 ms. The attention grabber then stopped moving and the auditory stimulus (in IDS or ADS), lasting 850 ms, was played through loudspeakers positioned behind the monitor. The attention grabber remained still for an interval varying randomly between 200 and 400 ms; the grabber then disappeared and the visual stimulus was presented for 1000 ms. A blank screen lasting between 1000 and 1200 ms served as an inter-trial interval between successive trials. Auditory stimuli in IDS or ADS were presented in random order with the constraint that no more than three trials of the same kind occurred in a row. Trials were presented for as long as the infants were willing to look at them; when they became fussy, the experimenters played a dynamic spiral together with an attractive sound. The session ended when the infant could no longer be attracted to the screen.  &#13;
EEG measurement and data analysis &#13;
The electrical potential was band-pass filtered between 0.3 and 100 Hz. The filtered EEG was then segmented into epochs spanning 600 ms before and 1400 ms after stimulus onset for each trial. EEG epochs containing artifacts caused by body and eye movements were automatically eliminated whenever the average amplitude of an 80 ms gliding window exceeded 55 µV at the horizontal electrooculogram (EOG) channels or 150 µV at any other channel. In addition to automatic rejection, each individual epoch was visually inspected for further selection. When &lt;10% of the channels contained artifacts, the contaminated channels were replaced by means of spline interpolation, while segments in which &gt;10% of the channels included artifacts were rejected. Infants exposed to upright faces contributed on average 17.5 artifact-free trials to the IDS condition (range: 8 to 36) and 18.34 trials to the ADS condition (range: 9 to 39); infants exposed to inverted faces contributed on average 20.89 artifact-free trials to the IDS condition (range: 8 to 38) and 20.78 trials to the ADS condition (range: 10 to 39).  &#13;
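The gliding-window rejection rule can be sketched roughly as follows. This is a minimal single-channel illustration under assumptions (a trace in µV sampled at 500 Hz, and "average amplitude" read as the mean absolute voltage within the window), not the lab's actual pipeline.

```python
import numpy as np

def exceeds_threshold(signal_uv, fs=500, win_ms=80, limit_uv=150.0):
    """Flag an epoch if any 80 ms gliding window's average amplitude
    exceeds the rejection limit (e.g. 150 uV for scalp channels,
    55 uV for horizontal EOG channels)."""
    win = int(fs * win_ms / 1000)            # 80 ms -> 40 samples at 500 Hz
    kernel = np.ones(win) / win              # moving-average kernel
    avg = np.convolve(np.abs(signal_uv), kernel, mode="valid")
    return bool(np.any(avg > limit_uv))

# A flat 10 uV trace passes; a sustained 200 uV excursion is rejected.
clean = np.full(1000, 10.0)
noisy = clean.copy()
noisy[400:500] = 200.0
```

For the EOG channels the same function would simply be called with `limit_uv=55.0`.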
Induced gamma-band oscillations in the artifact-free segments were extracted through time-frequency analysis. The segments were imported into Matlab® and re-referenced to the average reference using the free toolbox EEGLAB (v. 9.0.5.6b). The custom-made script collection WTools (available on request) was used to compute complex Morlet wavelets for frequencies from 10 to 90 Hz at 1 Hz resolution. A continuous wavelet transformation of single trials of EEG in each channel was performed on 2000 ms long segments (600 ms before and 1400 ms after stimulus onset). The transformed segments were averaged for each condition separately. To remove the distortion in the time-frequency decomposition caused by convolution with the wavelets, 400 ms at each edge of the epochs were discarded, leaving a segment from -200 to 1000 ms around the auditory event. The average amplitude of the 200 ms pre-stimulus window was used as the baseline and was subtracted from the whole segment at each frequency. &#13;
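As an illustration only (not the authors' WTools/Matlab pipeline), the steps above can be sketched in Python with NumPy: a complex Morlet wavelet transform per frequency, trimming of the convolution-distorted edges, and subtraction of the pre-stimulus baseline. The sampling rate, data, and wavelet cycle count here are hypothetical.

```python
import numpy as np

fs = 500                                   # Hz; hypothetical sampling rate
n = int(2.0 * fs)                          # 2000 ms epoch
t = (np.arange(n) - int(0.6 * fs)) / fs    # -600 ms to +1398 ms around onset
rng = np.random.default_rng(0)
epoch = rng.standard_normal(n)             # one synthetic single-trial channel

def morlet_amplitude(signal, fs, freqs, n_cycles=7):
    """Amplitude of a complex Morlet wavelet transform, one row per frequency."""
    out = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)              # Gaussian width (s)
        wt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)   # wavelet support
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy norm
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

freqs = np.arange(10, 91)                  # 10-90 Hz at 1 Hz resolution
tf = morlet_amplitude(epoch, fs, freqs)

# Discard 400 ms at each edge (convolution distortion), keeping -200 to 1000 ms.
keep = (t >= -0.2) & (t < 1.0)
tf, t = tf[:, keep], t[keep]

# Baseline: mean of the 200 ms pre-stimulus window, subtracted per frequency.
tf -= tf[:, t < 0].mean(axis=1, keepdims=True)
```

In a real pipeline this would be applied to each trial and channel before averaging within conditions, as described above.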
Based on prior findings (Parise &amp; Csibra, 2013), we selected a scalp location over the forehead (the average of channels 3, 9, 10, 15, 16, 18, 22, 23, around Fp2, Fpz, and Fp1; Figure 2), a time window from 200 to 600 ms, and a frequency window from 25 to 45 Hz.  &#13;
To verify that the number of accepted segments did not differ significantly across participants, a t-test comparing the average number of accepted segments in each condition (Speech) was performed, and the same procedure was repeated for each group (face orientation). The mean amplitude was assessed with a repeated-measures ANOVA with Speech (IDS vs. ADS) as a within-subject factor and Group (upright vs. inverted) as a between-subjects factor. Paired-sample t-tests were used for post hoc comparisons between the induced gamma-band oscillatory activity in response to IDS and to ADS. One-sample t-tests against 0 were used to assess whether the analysed gamma-band oscillatory activity differed significantly from the baseline.  &#13;
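Purely for illustration (the original analysis software is not shown here, and all values are synthetic), the post hoc and baseline contrasts could be run in Python with SciPy on baseline-corrected gamma amplitudes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_infants = 18                               # hypothetical group size
ids_amp = rng.normal(0.4, 0.5, n_infants)    # synthetic gamma amplitude, IDS
ads_amp = rng.normal(0.1, 0.5, n_infants)    # synthetic gamma amplitude, ADS

# Post hoc within-subject comparison: IDS vs. ADS (paired-sample t-test).
t_paired, p_paired = stats.ttest_rel(ids_amp, ads_amp)

# Difference from baseline: amplitudes are baseline-corrected, so a
# one-sample t-test against 0 asks whether activity changed at all.
t_base, p_base = stats.ttest_1samp(ids_amp, 0.0)

print(f"IDS vs ADS: t={t_paired:.2f}, p={p_paired:.3f}")
print(f"IDS vs baseline: t={t_base:.2f}, p={p_base:.3f}")
```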
 &#13;
Figure 2. Sensor layout for the Electrical Geodesics Inc. (EGI) 124-channel hydrocel sensor net, showing the locations of the electrodes under study (circled in green), averaged for measurement of the oscillatory activation.  </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1626">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1627">
                <text>Excel files; Matlab files; SPSS files. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1628">
                <text>Ciampolini2019</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1629">
                <text>John Towse</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1630">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1631">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1632">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1633">
                <text>Eugenio Parise </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1634">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1635">
                <text>Cognitive; developmental </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1636">
                <text>36 infants </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1637">
<text>ANOVA; t-tests</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="64" public="1" featured="0">
    <fileContainer>
      <file fileId="51">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d2dc5985e57b07e35905e64acb47b7b4.doc</src>
        <authentication>3370ed59d929ffce6ca5d977ec62bb7f</authentication>
      </file>
      <file fileId="52">
        <src>https://www.johnntowse.com/LUSTRE/files/original/99408598e35363745a56c58e81430f29.doc</src>
        <authentication>628eb1ba4a73e232e13333647109334e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1558">
                <text>Assessing Inference Making in Listening Comprehension in Children in Special Education</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1559">
                <text>Rebecca Hindle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1560">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1561">
<text>Successful listening comprehension involves making accurate inferences to interpret the meaning of a story. We assessed inference making in listening comprehension in children in special education in years 4, 5, and 6 (n = 12). Children listened to short stories and, after each story, answered questions assessing local and global coherence inferences. Analysis of variance (ANOVA) on children’s first responses revealed no significant main effects of presentation type (whole, segmented) or inference type (local, global). However, after children had received prompts, a significant main effect of inference type emerged, with children performing better on global than on local coherence inferences. Correlational analysis revealed no significant correlations between IQ and inference type, although the correlation between verbal IQ and inference type was stronger than that between non-verbal IQ and inference type. An independent t-test revealed no significant effect of diagnostic group on IQ or inference type, although children in the Autism group performed better than children in the MLD group on both IQ measures, while the MLD group scored better on both inference types. We conclude that inference type is important to consider when setting and asking comprehension questions, along with the use of prompts to reveal and assess children’s full comprehension ability. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1562">
                <text>Developmental Disorders</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1563">
                <text>&#13;
Participants&#13;
	The participants were 12 children from years 4, 5 and 6, aged between 8 and 11 (3 girls and 9 boys; M = 9.67, SD = 0.99), from a special needs school in the North West of England. All children had a statement of special educational needs, including Autism, Foetal Alcohol Syndrome, Moderate Learning Disability, Noonan Syndrome, Fetal Valproate Syndrome and Speech and Language Impairment. All children were verbal, with English as their first language. Consent was provided by parents/carers, the Head of School and each class teacher. &#13;
Measures&#13;
	IQ Task&#13;
	The WISC-IV was used to determine children’s IQ levels. Children completed one verbal and one non-verbal measure of IQ. The verbal measure was a vocabulary task: children were first shown pictures of items and asked what each was, progressing to words and being asked what each meant. Children scored 0, 1 or 2 points depending on the accuracy of their definitions, according to the WISC-IV manual. There were 36 items, increasing in difficulty, and testing stopped when children answered 5 questions incorrectly in a row. The non-verbal measure was a block design task comprising 14 items, starting with simple designs and progressing to more difficult ones. Children had to copy patterns either demonstrated by the experimenter (for the first 3 patterns) or presented in picture format (for the following items). There was a time constraint for each pattern, starting at 30 s and increasing to 120 s for the more difficult items. Once children had failed to complete 3 patterns in a row, testing ended.&#13;
Listening comprehension task&#13;
	The listening comprehension task was taken from Freed and Cain (2016), devised by the Language and Reading Research Consortium (LARRC) (2015). The full set of materials comprised 6 short stories; however, only 4 stories were used in the current study: Grandma’s Birthday, The Game, New Pet and A Family Day Out. All story topics were appropriate for this age group. Each story was paired with 8 questions assessing local and global coherence inferences (4 of each), with questions asked either throughout the stories (segmented format) or at the end of the stories (whole format). In 2 of the sessions, stories were presented in the whole format, and in the other 2 sessions, in the segmented format. All the stories were pre-recorded by Freed and Cain (2016) and delivered via a PowerPoint presentation on the researcher’s laptop to ensure consistent delivery regarding pace and word emphasis. All stories were available in both a whole and a segmented format. The format in which children listened to the stories was counterbalanced based on children’s IQ levels, from low to high.  &#13;
•	Whole story format. Children listened to the full story and were asked 8 comprehension questions at the end. The delivery of each whole-format story followed the same structure. &#13;
•	Segmented story format. Children listened to the story in 5 segments. After each segment the child was asked 1 or 2 questions, with 8 questions in total. The delivery of each segmented story followed the same structure. &#13;
	The average story length was 157 words. No pictures were included in the PowerPoint on which the story recordings were presented; this was to avoid children using pictures to help them answer the questions. Children were provided with verbal prompts if incomplete answers were given, to direct them to the correct answer. If children were still unable to answer, full knowledge checks were provided (see Table 1). All prompts were pre-written to ensure all children received the same level of prompting.&#13;
Procedure&#13;
	Pre-test&#13;
	The IQ assessments were administered individually in a quiet room in two separate sessions. Each session lasted between 10 and 15 minutes, depending on how many questions/trials the child completed. Children first completed the vocabulary test and then, in a separate session, the non-verbal block design measure. &#13;
Main assessment &#13;
	Children were presented with 4 short stories on 4 separate occasions, each story paired with 8 questions. Each story had to be completed in a separate session due to the attention and engagement levels of the children being tested. Each session lasted approximately 10 minutes depending on children’s accuracy and speed of answering. The procedure was explained to the children at the beginning of each session using a script to ensure consistency. They were informed that they would either be asked questions throughout the story or at the end of the story. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1564">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1565">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1566">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1567">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1568">
<text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2133">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2134">
                <text>Hindle2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2135">
                <text>Ellie Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2136">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2137">
                <text>Professor Kate Cain</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2138">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2139">
                <text>Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2140">
<text>12 participants (9 boys and 3 girls, aged 8-11)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2141">
                <text>ANOVA&#13;
t-test&#13;
Correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="63" public="1" featured="0">
    <fileContainer>
      <file fileId="53">
        <src>https://www.johnntowse.com/LUSTRE/files/original/6706f99fb62f6749b7c0d33bae37059f.pdf</src>
        <authentication>38f45aae780ada036b447d77607c2a80</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1552">
                <text>Investigating the effects of dimensionality and referent variability on word learning in autism and typical development.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1553">
                <text>Fiona Smith&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1554">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1555">
                <text>Dimensionality, referent variability, word learning.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2142">
<text>The ability to learn words from pictures could give children another forum to develop their lexical understanding and vocabulary. This is particularly important for children with developmental disorders such as Autism. This research investigated how word learning processes (referent selection, retention and generalisation) in autism and typical development are influenced by learning from pictures and objects, including single and multiple exemplars of symbols. The participants were 16 typically developing (TD) children (M age = 3.68; 7 males, 43.75%, and 8 females, 56.25%) and 16 children diagnosed with ASD (M age = 9.37; 8 males and 8 females, 50% each). Participants looked at pictorial and object referents, to determine whether word acquisition and retention differed depending on the structure of the stimuli. It was expected that word referent selection, retention and generalisation would be more accurate in the object condition than in the picture condition, as participants would not be relying on picture-word associations. Participants also examined words paired with either single or multiple exemplars of referents, to determine whether multiple exemplars of shape-matched referents would promote shape-based generalisation in the ASD group, which has been shown to be impaired (Hartley and Allen, 2014). It was expected that retention would be superior when learning directly from objects in both the ASD and TD groups, and this was found in this research. We also anticipated that labelling from multiple exemplars, rather than single exemplars, might scaffold more consistent shape-based generalisation. We found that referent selection was more accurate in both groups in the multiple exemplar condition than in the single exemplar condition. This research furthers understanding of how symbols or objects benefit word learning, retention and generalisation in children with ASD or TD, and of whether there are cognitive differences between the ASD and TD groups in word learning processes. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2143">
                <text>Participants&#13;
The participants in this study were 16 minimally verbal children with ASD (M age =&#13;
10.42 years, SD = 3.29) and 16 typically developing children (M age = 3.64, SD =&#13;
1.64).&#13;
Children with ASD were recruited from the specialist schools Dee Banks School in&#13;
Chester, and Hinderton School in Ellesmere Port. Typically developing children were&#13;
recruited via opportunity sampling, via the social media platform Facebook through&#13;
advertisement. &#13;
All the children with ASD received their diagnosis from a qualified clinical or&#13;
educational psychologist, obtained using standardised instruments (i.e. the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview-Revised; Lord, Rutter &amp; Le Couteur, 1994; Lord, Rutter, DiLavore &amp; Risi, 2002) and expert judgment. Clinical diagnosis was confirmed for children with autism using the
Childhood Autism Rating Scale (CARS; Schopler, Van Bourgondien, Wellman &amp;&#13;
Love, 2010), which was completed by a class teacher (Raw Score M score = 37.26,&#13;
Raw Score range = 27 – 53.5). The children with ASD were tested for receptive vocabulary by the experimenter using the British Picture Vocabulary Scale (BPVS; Dunn, Dunn, Whetton, &amp; Burley, 1997). Mean receptive vocabulary age of the children with autism was 2.84 years (range = 2 years 4 months – 6 years).&#13;
Some of the children diagnosed with ASD who participated in this study were current&#13;
PECS-users with impaired expressive language skills. Most of the children with ASD&#13;
who participated in this study were functionally non-verbal (no spoken words),&#13;
although some produced speech of 1–2 words in length (much of it echolalia), and one child could speak short phrases over three words in length.&#13;
Therefore, the sample was linguistically representative of children with ASD who&#13;
receive and may benefit from picture-based communication interventions. Participants&#13;
had 1–6 years’ experience of using PECS.&#13;
When recruiting the children diagnosed with ASD, the experimenter emailed&#13;
specialist schools, explaining the study and asking whether the school would be interested in&#13;
participating. When recruiting the TD children, advertisements were put on social &#13;
media platforms such as Facebook (see Appendix A). The information poster&#13;
instructed the parents to contact the experimenter via email if they were interested in&#13;
their child participating.&#13;
The study was approved by the Lancaster University Ethics Committee and informed&#13;
consent was obtained from parents before children were included in the study.&#13;
See Appendix B for the completed and approved Lancaster University Ethics&#13;
Committee form.&#13;
Materials&#13;
For the warm-up test trials in all tests, the participants were shown three familiar&#13;
objects (for example, dog, bus, chair); these were small laminated pictorial symbols.&#13;
In the picture, single exemplar and multiple exemplar conditions, the participants&#13;
were shown 12 laminated pictorial symbols, 6 familiar and 4 novel. The participants&#13;
saw each familiar symbol once and each novel symbol twice. Participants saw the same named novel&#13;
symbols in the retention test trial; in this trial, the named novel objects were shown to&#13;
each participant twice. In the generalisation test trial, the participants saw shape&#13;
matches (same object or picture, for example both would be paperclips) to the named&#13;
novel objects from the referent selection and retention test trials; however, these&#13;
were different colour variations (for example, a red and a blue paperclip). In the object&#13;
condition, participants followed the same test layout and number of referents as the&#13;
other conditions, the difference being that the stimuli were actual objects rather than&#13;
pictorial symbols. The words for the familiar stimuli were gathered using the CDI&#13;
database (Fenson, Dale, Reznick, Bates, Thal, Pethick, &amp; Stiles, 1994) and&#13;
appropriately age-matched to the non-verbal age range of the children with ASD and&#13;
the chronological age of the TD children (see Appendix C). The words for the novel stimuli were&#13;
picked from the NOUN database (Horst &amp; Hout, 2016); all were chosen to be two&#13;
syllables long, with different phonological sounds within each set. The novel words in&#13;
the picture condition were Gloop, Virdex, Akar and Teebu. The novel words for the&#13;
object condition were Fiffin, Tranzer, Brisp and Pentants. For the single exemplar&#13;
condition the novel words were Tulver, Kaki, Jefa and Blicket. For the multiple&#13;
exemplar condition the novel words were Zepper, Toma, Modi and Chatten (see&#13;
Appendix D).&#13;
Objects were obtained through the equipment assistant at Lancaster University and&#13;
purchased through Amazon. Appendix E is an example warm-up selection trial which&#13;
a participant saw, and the response form completed by the experimenter. Appendix F&#13;
is an example of a referent selection trial which a participant saw, and the response&#13;
form completed by the experimenter. Appendix G is an example of a retention test&#13;
trial which a participant saw, and the response form completed by the&#13;
experimenter. Appendix H is an example of the generalisation test trial which a&#13;
participant saw, and the response form completed by the experimenter. All test trials&#13;
were pseudorandomised per participant, per condition and trial. Therefore, although&#13;
all participants saw the same number of familiar and novel objects or pictures, and&#13;
each picture or object had the same name per shape-matched object, the items&#13;
appeared in a different order for each participant. A different response form was&#13;
therefore required per participant to record the change in referent location and set order.&#13;
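The ordering constraints above (a set never presented twice in a row, the novel item never occupying the same table location on consecutive trials, and a novel or familiar referent never requested more than twice consecutively) can be sketched as a simple rejection-sampling routine. This is a minimal illustration of the referent selection trial structure only: the function names, the 4-set/16-trial layout and the three table locations are assumptions drawn from the text, not the experimenters' actual materials-generation script.&#13;

```python
import random

# Hypothetical sketch: 4 sets, each presented 4 times (novel referent
# requested twice, each of the two familiar referents once) = 16 trials.

def make_trials(n_sets=4):
    return [{"set": s, "target": tgt}
            for s in range(n_sets)
            for tgt in ("novel", "novel", "familiar_a", "familiar_b")]

def order_ok(trials):
    # Constraint 1: a set is never presented twice in a row.
    if any(a["set"] == b["set"] for a, b in zip(trials, trials[1:])):
        return False
    # Constraint 2: a novel or familiar referent is never requested
    # more than twice consecutively.
    kinds = [t["target"] == "novel" for t in trials]
    return not any(kinds[i] == kinds[i + 1] == kinds[i + 2]
                   for i in range(len(kinds) - 2))

def pseudorandom_order(seed=None, locations=(0, 1, 2)):
    rng = random.Random(seed)
    trials = make_trials()
    # Rejection sampling: reshuffle until the order constraints hold.
    rng.shuffle(trials)
    while not order_ok(trials):
        rng.shuffle(trials)
    # Constraint 3: the novel item's location never repeats on
    # consecutive trials -- satisfied by construction.
    prev = None
    for t in trials:
        t["novel_loc"] = rng.choice([loc for loc in locations if loc != prev])
        prev = t["novel_loc"]
    return trials
```

Rejection sampling is adequate here because valid orders are common enough that a handful of reshuffles typically suffices, and it is why a fresh response form had to be printed per participant.&#13;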
Procedure&#13;
Prior to the children participating, the parents received the information sheet (see&#13;
Appendix I) and the consent form (see Appendix J). On the last day of experiments&#13;
the experimenter brought the debrief forms (see Appendix K).&#13;
Participants were tested individually, in their schools for the children with ASD or in&#13;
their own homes for the TD children, and were always accompanied by a familiar&#13;
adult (a teaching assistant or parent). The participants were seated at a table opposite&#13;
the experimenter; the materials were placed within reaching distance of the participants.&#13;
Children were reinforced throughout the session; however, correct performance was&#13;
reinforced only during the warm-up trials. The first test examined the picture&#13;
condition vs the object condition; the second test examined single vs multiple&#13;
exemplars. The tasks were between-participants, comparing the results of the TD&#13;
group with those of the ASD group; however, some within-participants analyses were&#13;
carried out to determine accuracy between test conditions (e.g. picture vs&#13;
object). Each task always consisted of a warm-up stage, referent selection trial,&#13;
distractor familiarisation trial, retention test trial and generalisation test trial. The test&#13;
trials were based on those used by Horst and Samuelson (2008), with the addition of a&#13;
generalisation trial, which was not included in the Horst and Samuelson (2008)&#13;
study.&#13;
Picture Condition vs Object Condition Tests&#13;
Warm Up Stage&#13;
In the object condition, participants were shown three sets of three familiar objects;&#13;
in the picture condition, participants were shown three sets of three familiar pictures.&#13;
Participants were asked to identify each in turn. The warm-up objects or pictures were&#13;
pseudorandomised per participant, changing the order and location per participant per&#13;
condition. The pictures or objects were removed and reordered after each set, and the&#13;
participant’s response recorded.&#13;
Referent Selection Trial&#13;
Participants were shown four sets of stimuli (pictures for the picture condition and&#13;
objects for the object condition); the sets of stimuli differed per condition, each&#13;
consisting of two familiar items and one novel item. Each set was shown four times;&#13;
the novel referent was requested twice and each of the two familiar referents once.&#13;
The order and location of the sets was pseudorandomised for each participant, the&#13;
location of the novel object was never in the same location twice consecutively, and a&#13;
novel or familiar object or picture was never requested more than twice consecutively.&#13;
Sets were not presented twice in a row.&#13;
Distractor Familiarisation&#13;
To control for novelty or familiarity preferences in the subsequent test trials, children&#13;
were shown all the novel objects that were used in the generalisation test trials. Each&#13;
new novel object was a different colour variation of a previously seen novel object&#13;
that had been named in the referent selection trial. Each new novel object or picture&#13;
was shown against a previously named novel object or picture that was not a shape or&#13;
colour match to the new item. The objects or pictures were placed in front of the&#13;
participant, who was not asked to identify them, only to “look”.&#13;
Retention Test Trial&#13;
Retention trials assessed children’s memory of the newly learned word–referent&#13;
pairings. Participants were shown four sets; each set was shown twice, with the target&#13;
object requested twice. The sets were made up of three named novel objects; names&#13;
were picked from the NOUN database (Horst &amp; Hout, 2016), each made up of two&#13;
syllables. Objects or pictures were picked on the basis that they would be novel to&#13;
participants, for instance gym or plumbing equipment. The objects and pictures were&#13;
not shape or colour matches to each other and had been shown in the referent&#13;
selection test trial. The order and location of each object or picture per set&#13;
was pseudorandomised per participant per trial. The location of the novel object was&#13;
never in the same location twice consecutively, and a novel or familiar object or&#13;
picture was never requested more than twice consecutively. Sets were not presented&#13;
twice in a row.&#13;
Generalisation Test Trial&#13;
Generalisation trials assessed children’s extension of labels to new items.&#13;
Participants were shown four sets, each consisting of three objects or pictures; each&#13;
set was shown twice, with the target object requested twice. The objects or pictures in&#13;
the sets were shape matches to the objects or pictures shown in the referent selection&#13;
and retention trials, but in different colour variations. All the shape-matched objects&#13;
or pictures were also colour-matched to a non-shape-matched object from the&#13;
previous conditions. The order and location of each object or picture per set was&#13;
pseudorandomised per participant per trial. The location of the novel object was never&#13;
in the same location twice consecutively, and a novel or familiar object or picture was&#13;
never requested more than twice consecutively. Sets were not presented twice in a&#13;
row.&#13;
Single vs Multiple Exemplars Tests&#13;
Warm Up Trial&#13;
Participants were shown three sets of three familiar pictures in both the single and&#13;
multiple exemplar conditions. Participants were asked to identify each in turn. The&#13;
pictures were pseudorandomised per participant, changing the order and location per&#13;
participant per condition. The pictures were removed and reordered after each set, and&#13;
the participant’s response recorded.&#13;
Referent Selection Trial&#13;
Participants were shown four sets of stimuli; the sets of stimuli differed per condition,&#13;
each consisting of two familiar items and one novel item. Each set was shown four&#13;
times; the novel referent was requested twice and each of the two familiar referents&#13;
once. In the multiple exemplar trial, two differently coloured versions of each&#13;
unfamiliar object were named (one per novel trial for each set). The order and&#13;
location of the sets was pseudorandomised for each participant per trial. The location&#13;
of the novel object was never in the same location twice consecutively, and a novel or&#13;
familiar object or picture was never requested more than twice consecutively. Sets&#13;
were not presented twice in a row.&#13;
Distractor Familiarisation&#13;
To control for novelty or familiarity preferences in the subsequent test trials, children&#13;
were shown all the novel pictures that were used in the generalisation test trials. Each&#13;
new novel picture was a different colour variation of a previously seen novel picture&#13;
referent that had been named in the referent selection trial. Each new novel picture&#13;
was shown against a previously named novel picture that was not a shape or colour&#13;
match to the new novel picture. The pictures were placed in front of the participant,&#13;
who was not asked to identify them, only to “look”.&#13;
Retention Test Trial&#13;
Retention trials assessed children’s memory of the newly learned word–referent&#13;
pairings. Participants were shown four sets; each set was shown twice, with the target&#13;
referent requested twice. The sets were made up of three named novel objects; names&#13;
were picked from the NOUN database (Horst &amp; Hout, 2016), each made up of two&#13;
syllables. Pictures were picked on the basis that they would be novel to participants,&#13;
for instance gym or plumbing equipment. The pictures were not shape or colour&#13;
matches to each other and had been shown in the referent selection test trial. The&#13;
order and location of each picture per set was pseudorandomised per participant per&#13;
trial. The location of the novel object was never in the same location twice&#13;
consecutively, and a novel or familiar object or picture was never requested more than&#13;
twice consecutively. Sets were not presented twice in a row.&#13;
Generalisation Test Trial&#13;
Generalisation trials assessed children’s extension of labels to new items.&#13;
Participants were shown four sets, each consisting of three pictures; each set was&#13;
shown twice, with the target object requested twice. The pictures in each set were&#13;
shape matches to the pictures shown in the referent selection and retention trials, but&#13;
in different colour variations. All the shape-matched pictures were also colour-matched&#13;
to a non-shape-matched object from the previous conditions. The order and location&#13;
of each picture per set was pseudorandomised per participant per trial. The location of&#13;
the novel object was never in the same location twice consecutively, and a novel or&#13;
familiar picture was never requested more than twice consecutively. Sets were not&#13;
presented twice in a row. In the multiple exemplar condition, the generalisation test&#13;
trial introduced the shape-matched referent in a third colour that was colour-matched&#13;
to a referent of a different shape seen in the referent selection or retention test&#13;
trial. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2144">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2145">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2146">
                <text>Smith2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2147">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2148">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2149">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2150">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2151">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2152">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1556">
<text>Calum Hartley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1557">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2153">
                <text>Cognitive, Developmental Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2154">
                <text>16 minimally verbal children with ASD and 16 typically developing children </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2155">
                <text>ANOVA, Correlation, quantitative, t-test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="61" public="1" featured="0">
    <fileContainer>
      <file fileId="57">
        <src>https://www.johnntowse.com/LUSTRE/files/original/ee6c9a9fb70a964519577d2b8a098680.doc</src>
        <authentication>dd2a4ec39b75345858daecc1f5050a4f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1521">
                <text>Use This or You’ll Lose That: Investigating Appropriate Psychological Theories to Market the Bogallme Tracking System.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1522">
                <text>Elizabeth Wardman</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1523">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1524">
                <text>The Bogallme Tracking System is an anonymous ‘Lost and Found’ system which uses stickers with QR codes printed on them to facilitate the return of lost items. It is thought that the main motivations behind the purchasing of these stickers are fear appeal and loss aversion, as people fear losing their possessions and will do whatever they can to prevent this from occurring. This study aimed to investigate whether this is the case using focus groups consisting of primarily students - the target audience for this specific product. The research also explored Rogers’ (1962; 1976) Diffusion of Innovations Theory (DOI) in relation to this product as well as opinions regarding the product and brand. Findings suggested that all three of the above theories are relevant and useful in the development of this product and can be used to create an efficient marketing campaign whilst creating scope for further research which would benefit the development of the brand and product. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1525">
                <text>Marketing/Advertising&#13;
Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1526">
                <text>Methodology&#13;
Participants.&#13;
Sixteen participants took part in this study. Participants were recruited via opportunity sampling through various social media platforms and word of mouth. The age of participants ranged between 20 and 23. This age range was selected due to a market segmentation suggesting that over 50% of QR code users were aged between 18 and 34 and that 18 to 24 year olds were 36% more likely to scan them (14 Million Americans Scanned QR Codes on their Mobile Phones in June 2011, n.d.).&#13;
Materials. &#13;
The focus groups loosely followed a discussion guide (See Appendix D) which asked general questions corresponding to the product, brand and incentives as well as questions related to Fear, Loss Aversion and Diffusion of Innovations theory. The majority of questions within the Discussion Guide were open-ended as they encourage participants to express their views and opinions in full (Turner, 2010) and allow for any further elaboration. During the focus group participants were shown three potential names for the brand (Scannit, GlobalQR and the brand name Bogallme) and an example of the Diffusion of Innovations Model (Figure 1). Participants were each given prototypes of the product that they tested during the group and were allowed to keep these at the end of the study. &#13;
Procedure.&#13;
Focus Groups&#13;
Focus groups were used as the method of data collection for this study. Although focus groups cannot provide data as rich as that of individual interviews, they can allow for group discussions. These group discussions and interactions allow for comparisons between participant experiences and opinions which could otherwise only be inferred after proceedings with individual interviews (Morgan, 1997). &#13;
This study consisted of two focus groups which lasted approximately 60 minutes each. Within each focus group, eight participants sat facing one another around a circular table. After reading the information sheet and signing the consent forms, the focus group started with introductory questions to make participants feel more comfortable and able to voice their opinions. After this brief period, participants were asked questions which followed the discussion guide (See Appendix D), however elaboration was allowed and encouraged. Each participant was encouraged to answer all questions and to contribute to discussions as much as possible. Participants were also made aware that they did not have to answer anything that made them feel uncomfortable. Debrief sheets were handed out to participants at the end of each group and any further questions were answered.&#13;
Analysis&#13;
Both of the focus groups were audio recorded on an Edirol R-09HR recorder and then transferred to a computer so that they could be deleted from the device. Recordings were then transcribed verbatim using the app Audacity, with each participant being given an anonymous ID in case of withdrawal. From these transcriptions, thematic analysis was conducted using the software NVivo, which identified and inferred themes and opinions in order to draw conclusions regarding the discussed theories of Fear, Loss Aversion and Diffusion of Innovations. Other themes and inferences also came to light which will be outlined in the Results section. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1527">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1528">
                <text>Results&#13;
There were several overarching themes present in both focus groups which relate to the three discussed theories (Fear, Loss Aversion and DOI Theory) and the proposed areas for exploration, along with new themes which were not previously considered. In response to the second objective, relating to participant motivations to buy and use the product, the main theme of ‘motivations’ was created. Under this theme came the categories ‘fear’, ‘loss aversion’ and ‘adoption’. Following this, further sub-categories were created for each category, each of which included ‘effective’ and ‘ineffective’. The ‘adoption’ category under this main theme also included the further sub-categories ‘explicit’ and ‘implicit’. The category ‘explicit’ was based on what participants said outright whereas the ‘implicit’ category was based on inferences and implications from the discussion. The next main theme was created in relation to the first objective, which aimed to explore brand and product opinions; it was named ‘brand ideas’ and contained the categories ‘name’, ‘product idea’, ‘incentives’ and ‘other opinions’. For the final and arguably most significant objective, the main theme of ‘development’ was created, which contained the categories ‘audience’, ‘barriers’ and ‘ideas’ and aimed to assist in making informed suggestions as to how to proceed with product development. &#13;
Brand Name&#13;
The opinions relating to the brand name were very clear: participants did not like it. After being presented with three options of a possible brand name with no previous knowledge, not one participant deemed ‘Bogallme’ appropriate for the product. Not one participant worked out that the word ‘Bogall’ was an anagram of the word ‘Global’, and the majority of participants chose the name ‘Scannit’ as the most appropriate for both the product and the brand. Many participants also had trouble in pronouncing the brand name correctly and it was pointed out in the first group that some individuals may have trouble reading it.&#13;
“It’s not compatible with my dyslexia that one! Not at all.” (PL: Age 22)&#13;
“The other two also worked like internationally, you’d have to think about that as even as people who speak English we didn’t get that.” (SD: Age 22)&#13;
Participants in both groups suggested that the name seemed quite childish and was trying too hard to be ‘down with the kids’ instead of being marketed at their age range. Another general consensus regarding the brand name was that it sounded similar to ‘Boggle’, the famous children’s board game, which again gave it a childish theme. &#13;
“It’s like the game Boggle you used to play when you were a kid.” (GP: Age 22)&#13;
Overall, it seems apparent that the brand name could have detrimental effects for the future development of the product.&#13;
Product Idea&#13;
Despite the brand name, after reading the product description, participants liked the concept of the product and agreed that it was something that they would use. &#13;
“I need this in my life (laughs)” (EG: Age 22)&#13;
The suggested uses for the stickers included: phones, keys, laptops, passports, luggage and notebooks. Participants said they were more likely to use the service in its current state (using Safari or another web browser) as opposed to downloading an app. However, some participants did have concerns surrounding the legitimacy of the product and would be wary when asked to fill in their details on the website. In terms of pricing, ideas of how much participants would pay for one sticker ranged from £1 to £10 with some participants suggesting that they would prefer to pay for a subscription service. The suggested subscription service consisted of paying a yearly fee for a certain number of stickers.&#13;
“Yeah you could subscribe for like a year and you get five stickers and you could use it on whatever you want” (RD: Age 22)&#13;
Despite this suggestion, many participants still disliked the idea of a subscription service, comparing it to services such as Amazon Prime, which continues to charge you if you forget to cancel. As participants were all students or graduates, most preferred paying per sticker, as it was affordable and non-binding. However, another subscription idea came to light when participants discussed the potential problem of people forging the stickers. It was suggested that a subscription could include unlimited stickers, with the payment covering use of the service as a whole. This would remove the motive for forging stickers, since forging would gain nothing once payment had already been made.&#13;
“Unless, if you do have a subscription then surely you’d be paying the same amount anyway no matter how many… so why would anyone copy theirs.” (GP: Age 22)&#13;
The issue of forging was a prominent topic within the second focus group. Participants suggested a variety of ways to overcome it: customisable stickers, laminated stickers, and the creation of a unique QR code similar to those used by Snapchat or Messenger. The idea of customisation was also popular in the first group. Several participants from this group said that they would not put the sticker, in its current form, on their mobile phone for aesthetic reasons. They did, however, state that if the stickers came in different colours or were customisable, they would be much more likely to purchase the product. &#13;
“I’d say make them customisable. If you could design your own stickers that would be… To match your phone case you could be like ‘ooh I’ll have it black with rose gold’ and then it would match and look cute” (GP: Age 22)&#13;
These participants did still agree that they would put the stickers on items other than phones, such as keys and passports, as it is less important to them for these items to be aesthetically pleasing. Stemming from this, the use of the stickers for travelling purposes was discussed in detail. Participants in the first group all agreed that the stickers would be a useful addition to travelling supplies, as they could be placed on passports and luggage. This idea was very popular with the group for a number of reasons. Firstly, a passport does not have the same sell-on value as a mobile phone, so it is much more likely to be returned. Secondly, speed of return matters: if you are travelling across several countries using many different transportation methods, it may be difficult to continue without documents such as your passport, so a speedy return is very important. Finally, people often buy new products and innovations for travelling out of excitement.&#13;
“You’re just looking for stuff to buy when you’re going travelling as well,  like ‘what do I need, what do I need’ so yeah I think that would work quite well.” (KR: Age 23)&#13;
&#13;
Fear and Loss Aversion&#13;
When asked how they would feel if they lost an item, most participants described feelings of stress and anxiety along with anger. Not all participants had experienced losing an important item themselves, but all had at least a friend or family member who had. Participants suggested that knowing these feelings would motivate them to return a found item, and that they would be more likely to return an item of personal rather than financial value. &#13;
One of the main advantages of the product was discussed when participants compared the product to insurance. It was suggested that the product was a cheaper alternative that, although return is not guaranteed, is better than no back-up at all. In terms of product development, these findings suggest that there is potential to work with an insurance company to effectively market the Tracking System.&#13;
“It’s kind of like an insurance isn’t it? Like for your phone so… I’d pay like a tenner if it was a one off because people pay, I don’t know, I think mine…well I don’t pay insurance lol but I think it’s like sixty pounds” (AB: Age 22)&#13;
The time saved by the product compared with insurance also produced positive comments, as participants explained how long it takes for an item to be replaced through insurance and how much effort this can involve.&#13;
“Also, insurance is like an effort, like you have to file a claim and then it takes ages for them to get it back but if you could just like message someone you like might get it today. It’s easier” (TM: Age 20)&#13;
Another comparison to insurance was made in terms of the personal value of possessions. When discussing phones, participants pointed out that they would prefer their original phone returned over a new phone of the same model, as the original has all their photos, music and settings on it, which can be difficult to retrieve if lost. &#13;
“(Be)cause you’ve got your photos and everything…like everything is set up on your phone in the way you like it. I hate setting up a phone when you first get it and you have to download everything and set it back up again.” (GP: Age 22)&#13;
Participants in the first group felt so strongly about the insurance aspect of the product that one attendee suggested the brand partner with a phone company and sell the product as an add-on to phone contracts. &#13;
“You need to have a partnership with like a phone company or something so when people start getting new phones and upgrades, say you partnership with O2 and you have it as part of your package on your phone or something.” (DF: Age 22)&#13;
&#13;
Incentives&#13;
The majority of participants stated that they would not require an incentive to use the service and return an item, and that empathy alone would be enough. Participants also suggested that the gratitude of the person who had lost the item could contribute towards their returning it. Some suggested that an incentive could add extra persuasion; however, it was quickly pointed out that any incentive would be difficult to monitor. Examples of incentives discussed included a lottery, money, and a points system whereby points could be collected towards a discount or a cash reward. Participants admitted that some people would be likely to abuse an incentive, as there would be no way to verify whether people were actually finding items or simply working with friends to make money or improve their chances in a lottery. Overall, it was decided that any incentive would either be abused or would fail to persuade someone who was unlikely to return the item anyway. &#13;
“Yeah it would’ve been such a good idea saying five returns gets you a free sticker but people literally will just get each other’s items and be like oh” (BC: Age 22)&#13;
However, it may be naïve of participants to expect all individuals to return items via the service with no incentive. They made good points about the potential abuse of incentives, yet incentives should not simply be dismissed because of this hurdle. A plausible alternative would be for the individual who lost the item to pay an incentive to the finder in order to retrieve it. &#13;
Adoption&#13;
When presented with the Diffusion of Innovations Model, all participants initially placed themselves in the centre of the model, between Early and Late Majority, or in the Late Majority. However, when asked where they stood for specific innovations, such as iPhones and apps, this picture altered somewhat. From the broader discussion it could be inferred that most participants fit the ‘Early Majority’ stage of the model: they would be more likely to buy the product if they could see it used successfully by someone else, but they also tend to try new innovations earlier than the majority. &#13;
“I was probably an early majority. I’d say I’m between early and late majority.” (GP: Age 22)&#13;
“Yeah, I’d have to hear people like using it well, like see people all around using it” (RH: Age 22)&#13;
When questioned as to the type of person that would be situated in the first two stages of the model, there was a variety of answers. In the first group the most popular answer was people in an older age group, with many participants describing the habits and behaviours of their fathers. &#13;
“I actually feel like older people like my dad or someone, he’d totally buy into this” (EG: Age 22)&#13;
They suggested that, due to the simplicity of the product and its purpose, this would be the first market to adopt it. Many were surprised by their own responses, as they had initially assumed that the product would be more popular with a younger audience. The second group also agreed on an older audience, suggesting that ‘overprotective mothers’ might buy the product to protect their children’s possessions. The second group also indicated that, whilst they did not think students would be the Innovators or Early Adopters, businesses targeting students would still be very interested in the product. &#13;
“I think anyone who’s in the student-y industry. I reckon you could quite easily do this with like nightclubs. Anything to do with students people would want to get involved with.” (BC: Age 22)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1529">
                <text>Bogallmetrackingsystem2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1530">
                <text>Frances Jackson </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1531">
                <text>There is no licence suggested for this work as far as the researcher is aware.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="1532">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1533">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1534">
                <text>Qualitative interview data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1535">
                <text>LA1 4YQ</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2216">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2217">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2218">
                <text>Marketing/Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2219">
                <text>16 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2220">
                <text>Qualitative (Thematic Analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="60" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1520">
                <text>temp</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="49" public="1" featured="0">
    <collection collectionId="4">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="183">
                  <text>Focus group</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="184">
                  <text>Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1394">
                <text>An Exploration of the Use and Effectiveness of Nature Imagery, Metaphor, and Symbolism in Advertising. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1395">
                <text>Konstantinos Perimenis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1396">
                <text>2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1397">
                <text>The core roles of nature imagery, indoor scenery, visual metaphor, and literal imagery in the construction of commercials have been established through repeated research. The current study aims to investigate in greater depth the role of two specific components of aesthetic communication (nature imagery, poetry) in advertising. Results suggested a significant preference for nature imagery over indoor scenery. However, the comparison between visual metaphor and literal image produced a more divided outcome, with participants suggesting that both were equally appealing. Overall, our results suggest that nature imagery was the most significant component in forming an appealing advertisement. We suggest that further research could investigate the effectiveness of other mediating components of aesthetics (verbal language, humour, music, etc.) in advertising.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1398">
                <text>Nature imagery in advertising, symbolism in advertising, metaphors in advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1399">
                <text>&lt;p&gt;In all focus groups, a digital voice recorder was used for later analysis. The first selected pair of ads, contrasting indoor and outdoor imagery, concerned the Coca-Cola brand. In the first Coca-Cola ad film, first broadcast in 2010, participants watched two young people inside an overcrowded bus. Although the two passengers were complete strangers, they eventually broke the ice, thanks to an invisible Coca-Cola bottle. The second Coca-Cola commercial presented diversity in terms of gender, religion and race within the United States of America. At the same time, viewers were given the opportunity to admire some of the most breathtaking landscapes in the USA.&lt;/p&gt;&#13;
&lt;p&gt;The second selected pair of ads, contrasting connotative and denotative imagery, concerned the Smirnoff brand. In Smirnoff’s connotative commercial, there were clear signs that its creators intended to show temptation and seduction. From the beginning it was clear that the starring couple was meant to represent a modern-day Adam and Eve. As the music picked up, snakes appeared from the bartender’s sleeves to help make an Apple Bite, and the customers got up to dance to a fast-paced song. The bartender led ‘Adam and Eve’ to the apple-flavoured cocktail, and the fast-paced music suggested that something big would happen if the drink was taken. This also insinuated that the drink was so desirable that they would not be able to resist it. In the denotative commercial, a stylish, classy man simply listed the values of Smirnoff vodka. The initial 40 advertisements were selected randomly from Coloribus.com (see Appendix H for full links), an online database of commercials and advertisements.&lt;/p&gt;&#13;
&lt;p&gt;&lt;b&gt;Data analysis &lt;/b&gt;&lt;/p&gt;&#13;
&lt;p&gt;Responses to the focus groups’ questions were thematically analysed. The current research followed the six-step thematic analysis approach described by Braun and Clarke (2006). Detailed observation notes were used to generate and apply codes to the qualitative data and to identify potential themes, as the small sample made this possible.&lt;/p&gt;</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1400">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1401">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1402">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1403">
                <text>WAV</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1404">
                <text>Perimenis2018</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1405">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="1406">
                <text>LA2 0PF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1407">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1408">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1409">
                <text>Psychology of Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1410">
                <text>For the purpose of advertisement selection, a pilot group of 3 participants was conducted. Following advertisement selection, two focus groups were formed: 6 participants in the first and 7 in the second. Participants in the pilot group and both focus groups (N = 16) were students from Lancaster University (age range 22-28). Inclusion criteria required participants to be over the age of 18 and able to physically attend the focus group. Across the two focus groups there were 5 males and 8 females; the pilot group consisted of 2 males and 1 female.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1411">
                <text>Qualitative</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="48" public="1" featured="0">
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1358">
                <text>Recalling Memories of Childhood Bullying: Links Between Early Victimisation and Anxiety in Adulthood</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1359">
                <text>Jenna Rayner</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1360">
                <text>2014</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1361">
                <text>Objectives: This study investigated the relationship between retrospective reports of bullying (primary school, secondary school and general experiences of bullying) and social anxiety (SAD), generalised anxiety (GAD) and grit (perseverance). Method: Demographic information was obtained from participants (n=147), along with measures of primary school bullying, secondary school bullying and general bullying experiences utilising the Retrospective Bullying Questionnaire (RBQ; Schafer et al., 2004). The Social Phobia Inventory (Connor et al., 2000) measured social anxiety, the Penn State Worry Questionnaire (Meyer et al., 1990) assessed general anxiety and the Grit Test (Duckworth et al., 2007) evaluated participants’ determination. Results: There was evidence that primary school bullying was associated with higher levels of GAD, whilst higher levels of SAD were associated with general bullying experiences. There was no evidence to suggest that the individual difference measure of grit affected anxiety. The results support previous studies that have linked anxiety disorders in adulthood to earlier experiences of bullying.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="1362">
                <text>  	In the Retrospective Bullying Questionnaire (RBQ; Schafer et al., 2004), there are a number of sections, three of which were used in this study. The first looks at bullying in primary school, the second at bullying in secondary school and the third at general bullying behaviour. The general bullying behaviour section concentrated on the long-term effects of any bullying the participants had experienced in primary or secondary school. This section asked questions such as “Do you ever have dreams or nightmares about the bullying events?” and “Do you ever feel distressed in situations which remind you of the bullying event(s)?” (Appendix A). &#13;
This questionnaire was subject to intensive pilot studies by Schafer et al. (2004), and its design drew on the success of Rivers&#8217; (2001) study, which also utilised a retrospective measure. Reliability of the RBQ was assessed in the Schafer et al. study, which found a good level of test-retest reliability (Spearman correlation coefficients: primary school r=.88, secondary school r=.87). &#13;
   	The Social Phobia Inventory (SPIN; Connor et al., 2000) is a 17-item self-report questionnaire that screens for social anxiety disorder and assesses its severity. The measure has three subsections which evaluate key symptoms of SAD: fear of social situations, avoidance of social situations and physiological discomfort within social situations. Each item is rated on a scale from zero to four. Scores range from 0 to 68, and a cut-off score of 19 or above distinguishes SAD sufferers from healthy controls. The SPIN has previously demonstrated good internal consistency as well as suitable test-retest reliability.&#13;
   	The Penn State Worry Questionnaire (Meyer, Miller, Metzger &amp; Borkovec, 1990) is a 16-item questionnaire which has been widely used in existing studies to measure generalised anxiety disorder. The questionnaire has been shown to differentiate between anxiety disorders; for example, generalised anxiety sufferers score higher than phobics (Meyer et al., 1990). Questions 1, 3, 8, 10 and 11 were reverse scored for the analysis. Each answer is scored on a five-point Likert-type scale ranging from 1 = not at all typical to 5 = very typical. Scores can range from 16 to 80; the average score in a “normal” student population was 49, while the average score in a GAD population was 68 for both men and women (Hawkins, 2008). &#13;
   	The Grit Test (Duckworth, Peterson, Matthews, &amp; Kelly, 2007) is a 12-item questionnaire which considers how &#8216;gritty&#8217; a person is: how they face challenges and how they react to them. Item scores are summed and divided by 12, giving a maximum score of 5 (extremely gritty) and a minimum score of 1 (not at all gritty). This measure was included as a personality measure to explore whether there are any links between the type of person someone is and whether they are bullied. &#13;
Procedure &#13;
   	Following the briefing sheet, participants received a consent form informing them of the nature of the study, their participation requirements, and their right to withdraw should they so wish. Once consent was gained, participants were asked to provide demographic information on the following: gender, age, educational achievement, relationship status, ethnicity and employment status. For the purposes of analysis, females were coded as 1 and males as 2.&#13;
   	Questionnaires made up the materials for this research project. Once participants had completed them, they were informed that the study had ended and given further insight into its nature. Participants were also given helplines and details of advisory websites to which they could turn if they felt affected by the nature of the research. Details of two journal articles related to the current research were provided so that participants could gather more information if they so wished.&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="1363">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="1364">
                <text>Rayner2014&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1365">
                <text>Anamarija Veic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1366">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="1367">
                <text>Data and a form </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="1368">
                <text>Dr Kathleen McCulloch</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="1369">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="1370">
                <text>  	A total of 167 adults participated in the study, and all were informed of the nature of the research. Participation was voluntary and all participants completed the survey online via SurveyMonkey. The sample was an opportunity sample: the researcher posted links to the survey via Facebook, Twitter and www.thestudentroom.co.uk (a site where students offer advice and help to each other). Friends on Facebook reposted or shared the advertisement for participants in order to reach a wider audience. Once participants followed the link to the survey on SurveyMonkey, they were presented with a briefing note which explained the nature of the study and their voluntary participation (describing how they could withdraw from the study with no repercussions). &#13;
   	From the initial sample of 167 adults, data from 20 participants were excluded because they were incomplete. This left a total of 147 participants, 72% female (106 female, 41 male), with an age range of 16-63. Participants were predominantly Caucasian (95%).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="1371">
                <text>Correlational Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="1376">
                <text>Social Psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
