<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=8" accessDate="2026-05-03T05:10:42+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>8</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="119" public="1" featured="0">
    <fileContainer>
      <file fileId="93">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e41ceedfeab654ddc688dcd34ee9e23a.csv</src>
        <authentication>118a1e65ad8ea8e41698f0cdca138337</authentication>
      </file>
      <file fileId="94">
        <src>https://www.johnntowse.com/LUSTRE/files/original/215933f28fe2df47cd7c39730d39dad5.csv</src>
        <authentication>4ad80f212ac97b3cc0b154f9c12f7894</authentication>
      </file>
      <file fileId="95">
        <src>https://www.johnntowse.com/LUSTRE/files/original/5608ca9c5fe099c705ea167d0d036936.csv</src>
        <authentication>d37289cd235af3b9f8f3bccecf8a7778</authentication>
      </file>
      <file fileId="96">
        <src>https://www.johnntowse.com/LUSTRE/files/original/59ad00d8ba92ab3752b9eea407e574bd.csv</src>
        <authentication>b0411d97dd20c96b87b841f0ef9e8925</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2575">
                <text>Examining the Effect of Anxiety on the Development of False Memory </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2576">
                <text>Mariyam Malsha Muneer</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2577">
                <text>8 September 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2578">
                <text>Until the late 1970s, people believed that memory worked much like a video recorder, accurately capturing and storing everything seen and heard. This belief was called into question once researchers began investigating memory thoroughly and found that memory is in fact highly impressionable and prone to numerous errors, such as the formation of false memories. Many causes of false memory formation have now been identified. However, little to no research exists on the effect of generalized anxiety disorder (GAD) on the formation of false memories. The present study aimed to investigate the effect of GAD on the development of false memories using the misinformation effect paradigm. Confidence-accuracy calibration (CAC) was assessed as a secondary analysis. Participants (N = 100) were recruited online and took part in a 15-45-minute experiment involving neutral stimuli. Participants watched a video of an event and, after completing filler tasks, read a text description containing misinformation; their memory of the original event was then tested. Results demonstrate that GAD and false memory are not significantly associated. The CAC analysis revealed that participants were relatively aware of when their memory had been distorted, providing low confidence ratings for more inaccurate items and higher confidence ratings for accurately recalled answers. Additionally, false memories created by misinformation were significantly observed, though GAD did not have any influence over this. In conclusion, GAD does not contribute to the formation of false memories.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2579">
                <text>memory, generalized anxiety disorder, confidence-accuracy calibration</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2580">
                <text>A total of 100 participants, aged 18 to 50, were recruited via social media sites and provided with an online link. Of the recruited participants, 66 identified as female, 31 as male, two as non-binary, and one preferred not to say. The link began with the consent sheet, and once participants clicked to agree, they were redirected to the start of the experiment.&#13;
Participants’ anxiety was tested by administering a standardized and validated tool, the Generalized Anxiety Disorder Questionnaire (GAD-7) (Spitzer et al., 2006) (see Appendix B). The GAD-7 has seven rating-scale questions, and each participant’s anxiety was calculated by assigning scores of zero (not at all), one (several days), two (more than half the days), and three (nearly every day). Sample questions include “worrying too much about different things?” and “becoming easily annoyed or irritable?”. For scores of ten and above, the GAD-7 has a specificity of 82% and a sensitivity of 89% (Kroenke et al., 2007). Cut-off points for the scores are five for mild anxiety, ten for moderate anxiety, and 15 for severe anxiety. For the present study, participants who scored nine and below were grouped under “low” anxiety, and participants who scored ten and above were grouped under “high” anxiety.&#13;
The stimulus set developed by Okado and Stark (2005) was used for this study. Two neutral stimuli were obtained, each consisting of 50 coloured digital images. These were compiled into a short video, with each image displayed for 300ms and the whole video lasting 150s. Of the 50 slides, 12 were critical, meaning they contained an item that would later be altered in the text description of the event, hence providing the misinformation. The two stimuli are summarized below.&#13;
Stimulus One is about a female named Rachel who is doing her work at home, feels hungry, checks her refrigerator for food, sees that there is not much at hand, and so goes grocery shopping. She browses different grocery aisles and sees a friend there as well. She then pays the bill, takes the elevator back home, and stores the food away. (See Appendix C for the critical images.)&#13;
Stimulus Two is about a male student named Nicholas who leaves his classroom to sit on a bench in the hallway, studying between classes, and runs into three friends: a male (Henry) who displays his new shirt, another male (Frank) who wants to know when an exam is scheduled, and a female (Stephanie) whose conversation is interrupted by a phone call. (See Appendix F for the critical images.)&#13;
Text descriptions derived from Okado and Stark’s (2005) stimulus set were used for the present study. For both Stimulus One and Stimulus Two, 12 critical details from the original event were altered in the text description, with every other detail remaining true to the original event. As an example of a critical detail, in Stimulus One’s original event a woman was seen picking up two bananas, whereas in the text description it was written, “She started with the healthy items and picked up five bananas.” (See Appendices D and G.)&#13;
A recognition test with three response options, derived from Okado and Stark (2005), was used for the present study. The test comprised 18 detailed questions concerning the video presented at the beginning (the original event phase). Of the 18 questions, 12 were critical questions (i.e., regarding the events that were changed in the text description) and six were control questions (i.e., regarding events that were consistent across the video and text description). After each question, participants reported their confidence in their response on a scale of 0-100, where zero indicated not at all confident and 100 indicated extremely confident.&#13;
A sample critical question was, “In the fruits section, how many bananas did Rachel pick up?” Participants were required to choose one answer out of three: (1) one banana (filler option), (2) two bananas (as seen in the original event’s video), and (3) five bananas (the altered detail presented in the text description). Control questions were similar in form, e.g., “Where does Rachel put her shopping bags in the kitchen?”, with the options: (1) on the counter (as seen in the original event’s video), (2) on the floor (filler option), and (3) on the table (filler option). (See Appendices E and H.)&#13;
The current research was designed as a 2x2x2 mixed factorial study. All participants completed every aspect of the experiment; hence, memory accuracy for control and critical items was a within-subject factor. Anxiety level (high and low) and stimulus (One and Two) were between-subject factors.&#13;
Participants were tested individually online and were informed that they were partaking in a study concerning memory and mood. The experiment was created online in Qualtrics, and participants were first required to consent. The consent sheet also explained that the study was completely voluntary and that participants could withdraw at any point. Subsequently, participants watched either Stimulus One or Stimulus Two (the two videos were presented at random), and a timer ensured that no skipping was allowed. Immediately afterwards, participants answered a few demographic questions pertaining to their age, education, and employment (see Appendix A), and then completed the GAD-7. These two questionnaires served as a filler task to allow some memory decay between watching the video of the event and reading the text description of the event.&#13;
Next, participants read the altered text descriptions of the original event shown in the video. Participants were unaware of the changes and were told that the text descriptions described the events from the original video. As with the video, a two-minute timer ensured that participants did not skip the text descriptions. Participants were then diverted to a game of sudoku, on which they spent at least five minutes. They were told that the researchers were interested in how individuals play games, and so were unaware of the game’s true purpose as a second filler task. Lastly, participants completed the recognition memory test, choosing the correct answer from the three response options and indicating their confidence in each answer to assess the confidence-accuracy relationship. The CAC analysis is relatively simple, computing accuracy at each level of confidence; with perfect calibration, the plot is a straight line in which accuracy matches confidence at every level.&#13;
Once finished, participants were thanked for their time and presented with the debrief sheet explaining the true nature of the study. The debrief sheet provided international and local helpline numbers for people from different continents, should they need to seek immediate assistance. Participants took an estimated 15-45 minutes to complete the experiment.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2581">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2582">
                <text>Excel/csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2583">
                <text>Muneer2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2584">
                <text>Ellen Dimeck, Cati Oates</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2585">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2586">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2587">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2588">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2589">
                <text>LA1 4YZ</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2590">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2591">
                <text>Clinical</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2592">
                <text>A total of 100 participants, aged 18 to 50, were recruited via social media sites and provided with an online link. Of the recruited participants, 66 identified as female, 31 as male, two as non-binary, and one preferred not to say.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2593">
                <text>ANOVA&#13;
Confidence-accuracy Calibration</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="118" public="1" featured="0">
    <collection collectionId="2">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="179">
                  <text>Eye tracking </text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="180">
                  <text>Understanding psychological processes through eye tracking</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2560">
                <text>Infants' Awareness of Number: Innate Ability or Perceptual Bias?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2561">
                <text>Jessica Sparks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2562">
                <text>07.09.2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2563">
                <text>In order to identify the origin of our understanding of numerosity and arithmetic abilities, it is essential that such abilities are measured in infants. In Wynn’s (1992) study, a case was made for an innate ability to perform arithmetic operations on small number sets, as it was demonstrated that infants would look longer at displays that violated their expectations of number. However, research in the years following this seminal study cast doubt on this interpretation of infants’ behaviour. Other research has suggested that perceptual biases are at play, rather than infants possessing a symbolic understanding of number. To address the contrasting findings in this area of developmental research, this study set out to analyse preexisting data to investigate the factors that influence infants’ abilities to track objects over occlusion and to identify the most appropriate level of interpretation of this ability. The present study recruited a sample of 32 infants across two experiments. Adapting the methodology from Wynn (1992), Experiment 1 measured looking time when an object was revealed to be missing from the display, violating infants’ expectation of presence. Experiment 2 measured looking time when an object was revealed to be in the incorrect position on the stage, violating infants’ expectation of position. It was found that violation trial type had a significant effect on infants’ looking time, and that whether the missing object was the first or last to be placed had a significant effect on looking time in violation-of-presence conditions.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2564">
                <text>Addition, subtraction, Number, Object Tracking, object files, Infant perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2565">
                <text>Participants:  &#13;
In this study, participants were 32 infants aged 5 to 7 months (M = 188.38 days, SD = 10.51, range = 175 – 218). Infants were 15 males and 17 females, with 16 participants in each experiment. In Experiment 1, participants were 7 males and 9 females. In Experiment 2, participants were 8 males and 8 females. Participants in each experiment were matched based on age.&#13;
Apparatus &amp; Stimuli: &#13;
The experiment took place in a dimly lit test room, with displays presented on a grey stage measuring 64cm wide by 40cm high and 31cm deep. An 8.5cm high black screen located 31.5cm behind the front of the stage was used to occlude the display by being rotated upwards. The display also consisted of a 30cm rotating platform that allowed different configurations of objects to be rotated rapidly. The objects used in this study were two 12.5cm high by 9.5cm wide toy hedgehogs that squeaked when squeezed. These toys were magnetic at the bottom.  &#13;
Procedure:  &#13;
Infants sat either in a high seat or on a caregiver’s lap, 60cm from the front edge of the stage. When infants sat on a caregiver’s lap, the caregiver’s eyes were above the stage so as to avoid them seeing the display and possibly influencing the infant’s behaviour. After gaze calibration to ensure the accuracy of eye-tracking measures, the procedure closely followed that of Wynn (1992) and Bremner et al. (2017).&#13;
Three pre-test (baseline) trials were presented initially. These presented the correct outcome of the operation as well as the two incorrect outcomes, in counterbalanced order. The screen was lowered to reveal either one or two toys, depending on the trial, and the observer recorded where the infant looked on the stage. Regarding the location of the toys, when one toy was presented, it was placed 7.5cm to the right of the stage’s centre; when two toys were presented, the second toy was placed 7.5cm to the left of the stage’s centre. Pre-test trials continued until the infant accumulated at least 2 seconds of looking time and looked away from the display for 2 seconds or more. When this was achieved, the screen was raised and the same procedure was repeated for the displays for the other two outcomes.&#13;
Test trials were administered in two blocks of four trials. The experimenter’s hand emerged at one side above the screen; the side at which the toy first appeared was counterbalanced across participants. The toy squeaked to capture the infant’s attention and continued to squeak to maintain this attention as it was placed in one of the locations used during the correct-outcome familiarisation trial. The experimenter then slowly withdrew their hand, clasping and unclasping it to show the infant that it was empty, and the screen was then raised to occlude the toy from the infant’s view. The time from the appearance of the toy to the withdrawal of the hand was approximately 5 seconds. The experimenter’s hand then reappeared above the screen from the opposite side of the display, holding an identical squeaking toy. Once the infant’s attention had been captured, the toy was placed in the other location used during correct-outcome familiarisation trials. The hand was then raised and, again, clasped and unclasped to show the infant that it was empty, and then slowly withdrawn from the display. The screen was then lowered to reveal either the correct or incorrect outcome.&#13;
In Experiment 1, conditions involved violation of object presence. In ‘added object absent’ trials, the screen was lowered to reveal that the last object to be placed was missing from the display. In ‘original object absent’ trials, the screen was lowered to reveal that the first object to be placed, present before the screen was raised, was missing from the display. In Experiment 2, conditions involved violation of object position. In ‘added object in wrong location’ trials, the screen was lowered to reveal that the last object to be placed appeared in the centre of the stage rather than on the side of the stage on which it was placed. In ‘original object in wrong location’ trials, the screen was lowered to reveal that the original object in the display appeared in the centre of the stage rather than on the side it occupied before the screen was raised.&#13;
These test trials continued until the infant had accumulated at least 2 seconds of looking time and looked away from the display for 2 seconds or more.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2566">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2567">
                <text>.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2568">
                <text>Sparks2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2569">
                <text>Julonna Peterson and Rebecca Mitchell</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2570">
                <text>open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2571">
                <text>Wynn's 1992 study</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2572">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2573">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2574">
                <text>Developmental </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2642">
                <text>Gavin Bremner</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2643">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2644">
                <text>Developmental</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2645">
                <text>32</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2646">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
    <tagContainer>
      <tag tagId="4">
        <name>infant perception</name>
      </tag>
    </tagContainer>
  </item>
  <item itemId="114" public="1" featured="0">
    <collection collectionId="3">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="181">
                  <text>EEG</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="182">
                  <text>Electroencephalography (EEG) is a method for monitoring electrical activity in the brain. It uses electrodes placed on or below the scalp to record activity with coarse spatial but high temporal resolution.</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2491">
                <text>Effect of Attention and Noise on Echoic Memory as Indexed by the N1-Adaptation. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2492">
                <text>Ekenedilichukwu Tonia Osakwe</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2493">
                <text>08.09.2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2494">
                <text>Numerous studies support the notion that echoic memory is indexed by the adaptation of the N1 peak in auditory event-related potentials (ERPs). Although research on the effects of parameters such as noise and attention on the amplitude of the N1 is extensive, to date there are no studies on the effect of these parameters on the adaptation of the N1. Here, I investigated the effect of noise and attention on the adaptation of the N1, P2, and N1-P2. Secondary analysis was conducted on data collected from 33 participants in three conditions: a passive recording condition (participants listened passively to stimuli while staring at a fixation cross); an attention/oddball condition (participants were tasked with counting the deviating tones); and a noise condition in which the tones were presented in white noise. Within each condition, two stimulus onset intervals (SOIs), 1.7 s and 3.5 s, were used in separate stimulus blocks, and the ratio R = M1.7s / M3.5s was used as a dimensionless measure of adaptation. My results showed no significant effect of noise or attention on the amplitudes or adaptation of the N1, P2, or N1-P2. I propose that the lack of effect on the adaptation of the ERPs might be due to noise and attention scaling all of the amplitudes equally, so that adaptation lifetime is not affected. As this is the first study of its kind, further research will be needed to gain a better understanding of how adaptation is affected by these two factors.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2495">
                <text>Attention, Noise, N1-adaptation, auditory sensory memory</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2496">
                <text>Participants&#13;
This project carries out secondary analysis on data from an EEG experiment with 33 human participants. The data were received from the supervisor, Patrick May. The participants were all adult undergraduate and postgraduate students at Lancaster University, with no self-reported hearing loss or neurological disorder. The experiment was approved by the research ethics procedures of the Department of Psychology, Lancaster University, and the participants provided written consent before the experiment began. &#13;
&#13;
Equipment and Procedure for EEG measurements&#13;
Three dry electrodes were attached at locations Fpz, Fz, and Cz. Reference and ground electrodes were attached to the right ear lobe. For this report, only the data acquired from the Fz location were used, as this was the channel that recorded the best ERPs for all the participants. The participants were directed to passively listen to stimuli while staring at a fixation cross, moving and blinking as little as possible. The stimuli comprised 500-Hz pure tones with a duration of 100 ms, including 10-ms linear onset and offset ramps, and were presented in blocks of 100 isochronous stimuli. The stimuli were presented binaurally via Sennheiser headphones using a laboratory laptop and MATLAB interfaced with the Enobio EEG device in a soundproof chamber. Data were collected in three conditions: a baseline passive recording condition (participants listened passively to stimuli while staring at a fixation cross); an attention/oddball condition (participants were tasked with counting the deviating tones); and a noise condition in which the tones were presented in white noise. Within each condition, two stimulus onset intervals (SOIs), 1.7 s and 3.5 s, were used in separate stimulus blocks. The order of experiments was randomised across the participants. &#13;
Data Analysis&#13;
The data were bandpass filtered at 1-30 Hz and sectioned into epochs of single-trial data. To remove artefacts (e.g., due to blinking), the 15% of epochs with the largest absolute amplitudes were removed. Single-trial epochs were then averaged to reveal the ERP. The average ERP in a 100-ms time window immediately preceding stimulus onset was calculated and subtracted from the whole ERP (baseline correction). The N1 is not the only peak that shows adaptation in auditory ERPs. Although much of the research on adaptation has focused on the N1 peak, researchers have also examined other auditory ERP peaks in relation to adaptation, such as the P2 and P3. In fact, Lanting et al. (2013) found that the P2 was more strongly affected by adaptation than the N1. In addition, the peak-to-peak difference between the N1 and the P2 has previously been used to estimate adaptation in several studies, as it provides a more reliable measure of activity in auditory cortex: it has the advantage of not depending on the baseline activity, which can be noisy (Lanting et al., 2013; Lavoie et al., 2008; Muller-Gass et al., 2008). Because of this, both the N1 and the P2 peaks were identified: the N1 as the peak negativity at around 100 ms and the P2 as the peak positivity at around 200 ms. The peak-to-peak difference between the N1 and the P2 was calculated, and the N1 and P2 amplitudes, as well as their difference, were used to estimate the lifetime of adaptation. Statistical analysis was conducted using analysis of variance (ANOVA). Specifically, three one-way (condition) and three two-way (SOI x condition) repeated-measures ANOVAs were conducted on the N1, P2, and N1-P2 amplitudes and amplitude ratios, respectively. &#13;
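As a minimal sketch of the epoch-rejection and baseline-correction steps described above (using synthetic data; the array shapes and baseline length are illustrative assumptions, not the study's actual recording parameters):

```python
import numpy as np

# Synthetic stand-in for epoched EEG: 100 trials x 300 samples
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, 300))
baseline_len = 50  # pre-stimulus samples (assumed, for illustration only)

# Discard the 15% of epochs with the largest absolute amplitudes,
# i.e. keep the 85% of epochs whose peak amplitude is smallest
peaks = np.abs(epochs).max(axis=1)
n_keep = round(epochs.shape[0] * 0.85)
clean = epochs[np.argsort(peaks)[:n_keep]]

# Average the surviving epochs to reveal the ERP, then subtract the
# mean of the pre-stimulus window (baseline correction)
erp = clean.mean(axis=0)
erp = erp - erp[:baseline_len].mean()
print(clean.shape[0])  # → 85
```

After baseline correction, the mean of the pre-stimulus window is zero by construction, so any post-stimulus deflection is measured relative to it.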
&#13;
Calculating the lifetime of adaptation (τ)&#13;
The recovery time constant for adaptation is usually calculated by fitting an exponentially saturating function to peak amplitudes plotted across SOIs (Lu et al., 1992). This curve is characterized by τ as well as by two other fitting parameters: asymptotic magnitude and crossing point on the SOI axis. The parameter τ determines the steepness of the magnitude curve: the smaller its value, the quicker the curve approaches the asymptote (i.e., levels out) as SOI is increased. The SOIs where this levelling out has occurred represent stimulation where the silent period between two consecutive stimuli is large enough for adaptation to have died away. Therefore, τ expresses the lifetime of adaptation: with low values, the curve levels out to its maximum value more quickly; with high values, the amplitude rises more slowly as a function of SOI, meaning that adaptation is strongly present over a larger range of SOIs.&#13;
For the exponential function to be fitted reliably, a large number of SOIs should be employed, and the largest SOI should be approximately 10 s to ensure that adaptation has died away. Coupled with the requirements of data quality (a large number of stimulus repetitions), this means long measurement times. In this experiment, this was bypassed by noting that the ratio between the magnitudes measured at two different SOIs is determined by τ. Expressing the magnitudes of the brain responses measured at SOIs of 1.7 s and 3.5 s by M1.7s and M3.5s, respectively, the ratio R = M1.7s / M3.5s was used as a dimensionless measure of τ and adaptation lifetime. The smaller R is, the shorter the adaptation lifetime. R was calculated separately for each participant, for each of the experimental conditions, and for each SOI. In addition, R was also calculated separately for the N1 and P2 peaks as well as for the difference between these peaks. Note that the actual adaptation lifetime cannot be estimated using this method.&#13;
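As a minimal illustration of the ratio measure (the peak magnitudes below are hypothetical example values, not the study's data):

```python
def adaptation_ratio(m_short, m_long):
    """Dimensionless adaptation measure R = M1.7s / M3.5s.
    Smaller R indicates a shorter adaptation lifetime."""
    return m_short / m_long

# Hypothetical absolute N1 magnitudes (uV) at SOI 1.7 s and 3.5 s,
# one (M1.7s, M3.5s) pair per participant
participants = [(3.2, 4.0), (2.5, 4.1), (4.4, 4.6)]
ratios = [round(adaptation_ratio(s, l), 2) for s, l in participants]
print(ratios)  # → [0.8, 0.61, 0.96]
```

Because R is a ratio of two magnitudes from the same participant and condition, scaling effects that act equally on both amplitudes cancel out, which matches the interpretation offered in the abstract.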
&#13;
Results&#13;
Eighteen participants' data did not show identifiable ERP responses and were thus discarded from the analysis. The ERPs obtained from the final sample of 15 were plotted as shown in Figure 1 for each participant. The means and standard deviations were then calculated for the identified N1, P2, and N1-P2 difference for each SOI and condition, as shown in Table 1. Given such large variability across the conditions, it is unsurprising that no statistical differences were found by the ANOVA. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2497">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2498">
                <text>data/r.csv&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2499">
                <text>Osakwe2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2500">
                <text>Emily Dreyer&#13;
Paige Durnall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2501">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2502">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2503">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2504">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2505">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2535">
                <text>Patrick May</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2536">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2537">
                <text>Neuroscience, Neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2538">
                <text>33 at the start; 18 were removed, so the final sample size is 15</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2539">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="113" public="1" featured="0">
    <fileContainer>
      <file fileId="98">
        <src>https://www.johnntowse.com/LUSTRE/files/original/4e2ce0b482bf0e6255a2b135dd3c4ef9.csv</src>
        <authentication>36a7395fbd0a52c6c6bc196d01f764c9</authentication>
      </file>
      <file fileId="101">
        <src>https://www.johnntowse.com/LUSTRE/files/original/ca80a766cdc965260b9e412e77ce5938.doc</src>
        <authentication>b325147f4b4d0e613dbe5a64177ef440</authentication>
      </file>
    </fileContainer>
    <collection collectionId="12">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="1136">
                  <text>linguistic analysis</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2471">
                <text>How the Correlation of Instagram Tweets' Readability and Brand Hedonism Affects Audience Engagement</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2472">
                <text>Jiehong Wu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2473">
                <text>Sep 8th 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2474">
                <text>Social media marketing is increasing in importance and more and more brands are embracing social media to increase their brand reach and communicate with their audience. However, there is still little empirical research on how brand message features affect consumer engagement. This study focuses on the impact of readability as an influence on consumer engagement while also noting that the effect of hedonic value of a brand may potentially moderate the level of audience engagement. An experiment based on a sample of 20 of the 100 brands covered by Forbes Media was conducted for this study. In total, a sample of 400 Instagram tweets were collected and analysed for their text readability and audience engagement. Still, the results did not indicate a significant interaction between readability and engagement. A careful analysis of the difficulties and shortcomings encountered in this experiment provides some insights for any subsequent research on the readability of short-form communication by brands.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2475">
                <text>Readability, brand hedonism, readability formula, audience engagement</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2476">
                <text>Research Question &amp; Hypotheses&#13;
The research question for this study is: can the readability of tweets influence the level of audience engagement?&#13;
&#13;
As readability increases the perception associated with processing fluency (Rennekamp, 2012), the ability to process information fluently makes the target message more appealing to the audience, and visual fluency in processing information can also increase people's perception of the processing target (Novemsky et al., 2007). Language that can be processed fluently also enhances consumer perceptions (Lee &amp; Aaker, 2004; Lee &amp; Labroo, 2004). Thus, for most brands with low levels of hedonism, higher tweet readability means higher processing fluency which can reduce audience metacognitive difficulties and thus increase tweet engagement levels.&#13;
&#13;
At the same time, for products with high hedonic demand, lower familiarity and uniqueness may provide consumers with greater signals of value, with metacognitive difficulties increasing the appeal of the product by making it appear unique or unusual. More easily processed messages reduce the appeal of the product, possibly because they appear too familiar and therefore less consistent with the perception of uniqueness (Pocheptsova &amp; Labroo, 2004; Pocheptsova, Labroo, &amp; Dhar, 2010).&#13;
&#13;
It is therefore hypothesised that text features associated with greater readability will be positively associated with consumer engagement with the message. However, given the presence of brand hedonistic features, it can be argued that low readability of messages may increase consumer engagement in brand tweets with higher levels of hedonism instead.&#13;
&#13;
Data collection for the experiment&#13;
From the above, whether the readability of the tweet text and the level of brand hedonism of the brand to which the tweet belongs combine to influence consumer engagement with the brand's social tweets must be determined.&#13;
&#13;
Instagram was chosen because it is one of the world's most popular social networks, with around one billion active users per month, and over two-thirds of the Instagram audience is under the age of 34, making the platform particularly attractive to marketers. At the same time, Instagram is an open public platform and information on experiments can be easily accessed by searching for the brand name to use for experiments. This included the number of followers of the brand, the history and content of the tweets, the number of comments and the number of likes. To make the experiment practical, 20 tweets from each of the 20 brands (see Step 1 below) were selected for the experiment. The process of collecting information was as follows.&#13;
&#13;
Step 1 involved the selection of the experimental subject brands. The results of a hedonism study of the top 100 most valuable brands in the world on the Forbes list (Davis et al., 2019) were used to rank the brands from the highest to the lowest level of hedonism, using the hedonism index (from the Davis et al., 2019 survey; for detail see Degree of brand hedonism) as the key indicator. A computer generated a random series of 20 numbers from 1 to 100, and the numbers in this series were matched to the serial numbers of the brands in the hedonism table. The following 20 brands were selected for this experiment: Goldman Sachs, HSBC, Walmart, Thomson Reuters, IBM, Subway, Verizon, HP, Hyundai USA, Boeing, Chanel, Coach, ESPN, Starbucks Coffee, Nike, Gucci, Amazon, Mercedes-Benz, Google, and Porsche (for the logic behind the selection of these brands, please see Degree of brand hedonism). &#13;
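The random-selection step can be sketched as follows; the seed is an arbitrary assumption for reproducibility, since the study does not report one:

```python
import random

random.seed(1)  # arbitrary seed, not from the study
# 20 distinct ranks drawn from 1-100, matching brands in the hedonism table
ranks = random.sample(range(1, 101), k=20)
print(len(ranks), len(set(ranks)))  # → 20 20
```

`random.sample` draws without replacement, so no brand rank can be selected twice.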
&#13;
In Step 2, text samples and audience engagement data were collected. To control the variables of the experiment as far as possible, text samples of tweets were collected from August 12 to August 13, 2021, and only tweets of 30-150 words were selected, to control the discreteness of the sample. To avoid the influence of rich media such as video/audio on audience engagement, tweets in rich-media form were also excluded from the sample, ensuring that all samples contained only images and textual content. The number of likes and comments on each tweet was also recorded. To ensure that the selected sample of tweets had accumulated enough likes and comments, all samples were posted before 7 August, giving them at least five days to accumulate interaction data with the audience. According to an official Twitter report (Twitter, 2016), owing to the instantaneous nature of social media platforms, tweets are in general largely ignored by audiences a week after they are posted and therefore accumulate little further feedback data.&#13;
&#13;
Step 3 was the readability analysis of the text samples. Considering that some of the tweet samples were under 100 words, and that the Flesch Reading Ease formula recommends a text of 100 words or more, this experiment combined two or more samples for tweets of fewer than 100 words so as to obtain at least 100 words before applying the formula; the average readability score for that group of samples was then calculated (see Message readability for details of the Flesch Reading Ease formula).&#13;
&#13;
Variables and measures&#13;
Message readability &#13;
Readability formulas have evolved to the point where there are now over 40 of them (Heydari, 2012). The most widely known is Rudolf Flesch's formula, created in 1948 and published in the Journal of Applied Psychology in his article 'A New Readability Yardstick'. This formula is considered one of the oldest and most accurate readability formulas, and it made Flesch an authority on readability scholarship. It was originally created to assess the readability of texts by reader grade level and has been widely accepted as an accurate measure without much scrutiny. The formula is best suited to school texts, but it is also widely used by US government agencies (including the US Department of Defense) to assess the readability of their published documents and forms, and some states even require insurance policies to achieve a Flesch Reading Ease score of 45 or higher. The formula is even built into Microsoft Word, where the program checks the spelling and grammar of a text as well as its readability level (Heydari, 2012).&#13;
&#13;
The specific mathematical formula is as follows: &#13;
RE = 206.835 – (1.015 × ASL) – (84.6 × ASW)&#13;
RE = Reading Ease score; it ranges from 0 to 100, and the higher the number, the easier the text is to read&#13;
ASL = Average Sentence Length (i.e., the number of words divided by the number of sentences)&#13;
ASW = Average number of Syllables per Word (i.e., the number of syllables divided by the number of words)&#13;
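As a minimal sketch, the formula translates directly to code (the function name is illustrative; the study itself used an online checker rather than computing scores by hand):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: 0-100, higher means easier to read."""
    asl = words / sentences    # average sentence length
    asw = syllables / words    # average syllables per word
    return 206.835 - (1.015 * asl) - (84.6 * asw)

# e.g. 100 words, 8 sentences, 140 syllables gives roughly 75.7
# ("fairly easy to read", about 7th grade in the table below)
```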
&#13;
 &#13;
&#13;
Table 1: Description and predicted reading grade for Flesch Reading Ease scores (Stein, 1984)&#13;
Score	School level (US)	Notes&#13;
100.0–90.0	5th grade	Very easy to read. Easily understood by an average 11-year-old student.&#13;
90.0–80.0	6th grade	Easy to read. Conversational English for consumers.&#13;
80.0–70.0	7th grade	Fairly easy to read.&#13;
70.0–60.0	8th &amp; 9th grade	Plain English. Easily understood by 13- to 15-year-old students.&#13;
60.0–50.0	10th to 12th grade	Fairly difficult to read.&#13;
50.0–30.0	College	Difficult to read.&#13;
30.0–10.0	College graduate	Very difficult to read. Best understood by university graduates.&#13;
10.0–0.0	Professional	Extremely difficult to read. Best understood by university graduates.&#13;
&#13;
As can be deduced from the formula, texts score as easier when their sentences and words are short. Since most social media texts consist of short sentences and words, the Flesch Reading Ease score was considered the most suitable tool for measuring the readability of tweets in this experiment. The Flesch Reading Ease formula as implemented in an online automatic readability checker (https://readabilityformulas.com/free-readability-formula-tests.php) was used in this study.&#13;
&#13;
Consumer engagement with brands&#13;
As Instagram retweets can only be sent to friends or groups of friends and not to the user's public page, retweet data are difficult to collect, so this experiment measured only the number of likes (users click the red heart button below a post, or double-click the post itself) and comments on each tweet. As described in the data collection process, the collected tweets were given at least five days to accumulate comments and likes. The numbers of comments and likes were then summed, divided by the brand's follower count, and multiplied by 10,000 to give the final audience engagement score.&#13;
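The engagement score described above can be sketched as follows (the function name and the example figures are illustrative):

```python
def engagement_score(likes, comments, followers):
    """(likes + comments) per follower, scaled by 10,000."""
    return (likes + comments) / followers * 10_000

# e.g. 350 likes and 50 comments on a brand account with
# 2,000,000 followers gives a score of 2.0
```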
&#13;
Degree of brand hedonism&#13;
As this experiment was limited by resources and practicality, the brand hedonism scores from the Davis et al. (2019) survey were used directly. Davis et al. (2019) measured the hedonism level of 100 brands, primarily via human judges using a rating scale (four brands that were not active on social media were excluded, giving a final total of 96 brands). Their procedure is summarised below.&#13;
&#13;
In the Davis et al. experiment, a total of 200 human judges scored brand hedonism. Each judge was randomly assigned 10 brands and rated each brand on four hedonism-related indicators: fun, excitement, thrill and pleasure, on a scale from 1 ('not at all') to 7 ('very much'). The brand hedonism index was derived by combining these four indicators and then averaging across the judges who rated each brand. The judges were recruited from the Amazon Mechanical Turk online panel; 61% were male and the remainder female, all aged 35 years and of unknown ethnic background, and all were US residents. Detailed results of the original experiment can be found in Appendix A.&#13;
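Under the stated procedure, each brand's index is the mean of the four indicator ratings, averaged over its judges. A minimal sketch (function name illustrative):

```python
import statistics

def hedonism_index(judge_ratings):
    """judge_ratings: one (fun, excitement, thrill, pleasure) tuple
    per judge, each rating on the 1-7 scale."""
    per_judge = [statistics.mean(r) for r in judge_ratings]  # index per judge
    return statistics.mean(per_judge)                        # average across judges

# two judges rating a brand (6,6,6,6) and (4,4,4,4) give an index of 5
```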
&#13;
In the present experiment, the brands were ranked from highest to lowest hedonism level using the Davis et al. index. A computer generated a random series of 20 numbers between 1 and 96, and these numbers were matched to the brands' positions in the hedonism ranking to select the experimental subjects. Table 2 shows the mean hedonism scores of the 20 selected brands. Figure 1 shows the conceptual model for this experiment, together with the experimental and control variables.&#13;
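The selection step might be sketched as below; sampling without replacement and the seed are assumptions, since the text only states that 20 random numbers between 1 and 96 were generated:

```python
import random

random.seed(0)                       # illustrative seed only
ranks = list(range(1, 97))           # brand positions in the hedonism ranking
selected = random.sample(ranks, 20)  # 20 distinct positions, no replacement
```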
&#13;
Table 2: Brand hedonism scores&#13;
&#13;
No.	Brand	Mean	SD&#13;
1	Porsche	6.05 	1.26 &#13;
2	Google	5.92 	0.95 &#13;
3	Mercedes-Benz	5.68 	1.13 &#13;
4	Amazon	5.41 	1.59 &#13;
5	Gucci	5.29 	1.13 &#13;
6	Nike	5.05 	1.40 &#13;
7	Starbucks Coffee	4.89 	1.19 &#13;
8	ESPN	4.75 	1.93 &#13;
9	Coach. Inc	4.53 	1.60 &#13;
10	Chanel	4.40 	1.26 &#13;
11	Boeing	4.27 	1.77 &#13;
12	Hyundai USA	4.12 	1.45 &#13;
13	HP	3.86 	1.75 &#13;
14	Subway	3.75 	1.67 &#13;
15	Verizon	3.75 	1.36 &#13;
16	IBM	3.45 	1.47 &#13;
17	Walmart	3.15 	1.39 &#13;
18	Walmart	3.15 	1.39 &#13;
19	HSBC	2.89 	1.35 &#13;
20	Goldman Sachs	2.14 	1.23 &#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2477">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2478">
                <text>Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2479">
                <text>Wu 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2480">
                <text>Chloe Keung, Elena Ball</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2481">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2482">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2483">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2484">
                <text>Word</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2485">
                <text>LA1 4YZ</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2486">
                <text>Robert Davies</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2487">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2488">
                <text>Marketing </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2489">
                <text>20 tweets from each of the 20 brands&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2490">
                <text>Regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="110" public="1" featured="0">
    <fileContainer>
      <file fileId="107">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d8acdd6e35b9e568f302f663b5586651.csv</src>
        <authentication>19d1bf01524769b5b55a3256b6cf49ae</authentication>
      </file>
      <file fileId="108">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f98c0a911d3895913f9cfa1c92377726.csv</src>
        <authentication>513a85662bbc1b8ef486ceb1c3bb1228</authentication>
      </file>
      <file fileId="113">
        <src>https://www.johnntowse.com/LUSTRE/files/original/fd2c2252480cd0452daa1b6edbb6a741.doc</src>
        <authentication>1fce62672b69edac5730fa2715adf854</authentication>
      </file>
    </fileContainer>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2416">
                <text>Age-related Changes to the Attentional Modulation of Temporal Binding</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2417">
                <text>Jessica Pepper</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2418">
                <text>08.09.2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2419">
                <text>In multisensory integration, the time range within which visual and auditory information can be perceived as synchronous and bound together is known as the temporal binding window (TBW). With increasing age, the TBW becomes wider, such that older adults erroneously, and often dangerously, integrate sensory inputs that are asynchronous. Recent research suggests that attentional cues can narrow the width of the TBW in younger adults, sharpening temporal perception and increasing the accuracy of integration. However, due to their age-related declines in attentional control, it is not yet known whether older adults can deploy attentional resources to narrow the TBW in the same way as younger adults.&#13;
This study investigated the age-related changes to the attentional modulation of the TBW. 30 younger and 30 older adults completed a cued-spatial-attention version of the stream-bounce illusion, assessing the extent to which the visual and auditory stimuli were integrated when presented at three different stimulus onset asynchronies, and when attending to a validly-cued or invalidly-cued location. &#13;
A 2x2x3 mixed ANOVA revealed that when participants attended to the validly-cued location (i.e. when attention was present), susceptibility to the stream-bounce illusion decreased. However, crucially, this attentional manipulation affected audiovisual integration in younger adults but not in older adults. Whilst no definitive conclusions could be drawn about the width of the TBW, the findings suggest that older adults have multisensory integration-related attentional deficits. Directions for future research and practical applications surrounding treatments to improve the safety of older adults’ perception and navigation through the environment are discussed. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2420">
                <text>Ageing, attention, TBW, multisensory integration, stream-bounce illusion</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2421">
                <text>Pre-screening tools&#13;
Participants were asked to complete two pre-screening questionnaires using Qualtrics survey software (www.qualtrics.com), to assess their eligibility for the study.&#13;
Speech, Spatial and Quality of Hearing Questionnaire (SSQ; Appendix A; Gatehouse &amp; Noble, 2004). Participants rated their hearing ability in different acoustic scenarios using a sliding scale from 0-10 (0=“Not at all”, 10=“Perfectly”). Whilst, at present, no defined cut-off score on the SSQ is available as a parameter to inform decision-making, previous studies have indicated that a mean score of 5.5 is indicative of moderate hearing loss (Gatehouse &amp; Noble, 2004). As a result, people whose average score on the SSQ was lower than 5.5 were not eligible to participate in the experiment.&#13;
Informant Questionnaire on Cognitive Decline in the Elderly (IQ-CODE; Appendix B; Jorm, 2004). Participants rated how their performance in certain tasks now has changed compared to 10 years ago, answering on a 5-point Likert scale (1=“Much Improved”, 5=“Much worse”). An average score of approximately 3.3 is the usual cut-off point when evaluating cognitive impairment and dementia (Jorm, 2004), therefore people whose average score was higher than 3.3 were not eligible to participate in the experiment. &#13;
The mean scores on each pre-screening questionnaire are displayed in Table 1. An independent t-test revealed no significant difference between age groups on the SSQ questionnaire [t(58) = -1.15, p=.253]; however, there was a significant difference between age groups on the IQ-CODE questionnaire [t(58) = -13.29, p&lt;.001].&#13;
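The two cut-offs described above can be combined into a single eligibility check, sketched below (the function name is illustrative):

```python
def eligible(ssq_mean, iqcode_mean):
    """Eligible when the SSQ mean is at least 5.5 (lower suggests
    moderate hearing loss) and the IQ-CODE mean does not exceed 3.3
    (higher suggests cognitive impairment)."""
    return ssq_mean >= 5.5 and not iqcode_mean > 3.3

# eligible(7.2, 3.0) is True; eligible(4.9, 3.0) fails the hearing
# cut-off; eligible(7.2, 3.5) fails the cognitive cut-off
```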
Experimental Design&#13;
This research implemented a 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(Stimulus Onset Asynchrony [SOA]: Visual Only [VO] vs 0 milliseconds vs 150 milliseconds vs 300 milliseconds) mixed design, with Age as a between-subjects factor and Cue and SOA as within-subjects factors.&#13;
The experiment consisted of 16 different trial conditions (Table 2), randomised across all participants. Replicating the paradigm used by Donohue et al. (2015), the experimental block contained 72 validly-cued trials and 24 invalidly-cued trials, which were equally distributed between each side of the screen (left/right) and SOA conditions; this means that each participant completed 144 valid trials and 48 invalid trials for each SOA.  &#13;
&#13;
Stimuli and Materials&#13;
Participants completed the experiment remotely, in a quiet room on a desktop or laptop computer with a standard keyboard. All participants were asked to wear headphones/earphones. A volume check was conducted at the beginning of the experiment; participants were presented with a constant tone and asked to adjust the volume of this tone to a clear and comfortable level. &#13;
The stimuli used in the task were replicated from Donohue et al. (2015). Each trial started with an attentional cue in the centre of the screen – a letter “L” or a letter “R” instructing participants to focus on the left or the right side of the screen. In addition to this, 2 pairs of circles were positioned at the top of the screen, one pair in the left hemifield and one pair in the right hemifield. The attentional cue lasted for 1 second, and 650 milliseconds after this cue disappeared, the circles in each pair started to move towards each other downwards diagonally (i.e. the two left circles moving towards each other and the two right circles moving towards each other). &#13;
In the trials, one pair of circles moved towards each other, intersected, and continued on the same trajectory (fully overlapping and moving away from each other). This full motion of the circles formed an “X” shape, with the circles appearing to “stream” or “pass through” each other. On the opposite side of the screen, the other pair of circles stopped moving before they intersected, forming half of this “X” motion. On 75% of the trials, the full “X”-shaped motion appeared on the side of the screen that the cue directed participants towards (validly-cued trials); on the other 25% of trials, the full motion occurred on opposite side of the screen to where the cue indicated, and the stopped motion occurred at the cued location (invalidly-cued trials).&#13;
In addition to these visual stimuli, on 75% of the trials, an auditory stimulus was played binaurally (500Hz, 17 milliseconds), either at the same time as the circles intersected (0ms delay), 150ms after the intersection or 300ms after the intersection. The remaining 25% of the trials were visual-only (i.e. no sound was played). Participants were told that regardless of whether a sound was played, they must make their pass/bounce judgements based on the full motion of the circles (the “X” shape), even if the full motion occurred at the opposite side of the screen that they were attending to. &#13;
The experiment ended after all 768 trials – participation lasted approximately 1 hour. The experiment was built in PsychoPy2 (Peirce et al., 2019) and hosted by Pavlovia (www.pavlovia.org). &#13;
&#13;
Procedure&#13;
Prior to the experiment, a brief meeting was organised between the participant and the researcher via Microsoft Teams, to explain the task and answer any questions. Participants were emailed a link to a Qualtrics survey, which included the participant information sheet, consent form, demographic questions and pre-screening questionnaires. If the person was deemed eligible to take part in the experiment, Qualtrics redirected participants to the experiment in Pavlovia.&#13;
Participants were then presented with instructions detailing the attentional cue elements of the task and asking them to base their judgements on the full X-shaped motion of the stimuli. Participants were asked to press M on the keyboard if they perceived the circles to “pass through” each other or press Z if they perceived the circles to “bounce off” each other, answering as quickly and as accurately as possible. &#13;
Participants completed a practice block of 10 trials, then the test session commenced. After each set of 10 random trials, participants had the opportunity to take a break. Participants were provided with a full debrief upon completion of the experiment, and all participants could enter a prize draw to win one of two £50 Amazon vouchers.&#13;
&#13;
Statistical Analyses&#13;
This study required two separate mixed ANOVAs to analyse main effects and interactions, investigating significant differences between groups and conditions.&#13;
Reaction Times. &#13;
For the first dependent variable of reaction times (RT), mean RTs were calculated for each participant in each Cue x SOA condition, representing the time taken, in milliseconds, for each participant to press M or Z on the keyboard at the end of each trial. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(SOA: 0ms vs 150ms vs 300ms vs Visual-Only) mixed ANOVA was then conducted on these mean RTs. &#13;
Bounce/Pass Judgements. &#13;
For the second dependent variable of the bounce/pass judgements, the percentage of “Bounce” responses provided in each Cue x SOA condition was calculated for each participant. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 3(SOA: 0ms vs 150ms vs 300ms) mixed ANOVA was then conducted on these percentage data. Visual-Only (VO) trials were compared separately for valid and invalid conditions using a paired samples t-test. Post-hoc paired samples t-tests were also used to investigate significant differences between the 0ms, 150ms and 300ms SOA conditions. &#13;
Bounce/Pass Judgements: Pairwise comparisons. To analyse pairwise comparisons in the significant interaction of Age and Cue, responses in each SOA condition were collapsed – that is, a grand mean percentage of “Bounce” responses was calculated by averaging the percentage of “Bounce” responses in the 0ms, 150ms and 300ms trials in the Valid condition and in the Invalid condition. This produced an overall Valid and an overall Invalid mean percentage of “Bounce” responses for each participant. A 2(Age: Younger vs Older) x 2(Collapsed Cue: Valid vs Invalid) mixed ANOVA was conducted on this collapsed data to investigate differences between the proportion of “Bounce” responses in the Valid and Invalid condition for younger adults, and in the Valid and Invalid condition for older adults. In addition, 2 separate one-way ANOVAs were conducted on this collapsed data (Age as the between-subjects factor, and Valid or Invalid as the within-subjects factor) to investigate differences between younger and older adults in the Valid condition, and differences between younger and older adults in the Invalid condition (Laerd, 2015). &#13;
Significance. &#13;
An alpha level of .05 was used for all statistical tests. Any responses (judgements or RTs) that were ±3 standard deviations from the mean were considered anomalous and were removed from the analyses. Mauchly’s test of sphericity was violated for the main effect of SOA, therefore Greenhouse Geisser adjusted p-values were used where appropriate. As an a-priori power analysis determined the desired sample size for this study, and this sample size was achieved, non-significant results will not be due to the study being underpowered. Statistical analyses were conducted using SPSS (version 25, IBM).&#13;
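The ±3 standard deviation exclusion rule can be sketched as follows; this is a minimal illustration, as the actual cleaning was presumably performed per condition in SPSS:

```python
import statistics

def remove_outliers(values, k=3.0):
    """Drop any value lying more than k standard deviations
    from the sample mean."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    kept = []
    for v in values:
        if abs(v - m) > k * sd:  # beyond k SDs: treat as anomalous
            continue
        kept.append(v)
    return kept
```

Note that with small samples an extreme value inflates the standard deviation, so a ±3 SD rule can be conservative about flagging it.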
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2422">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2423">
                <text>xlsx file</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2424">
                <text>Pepper 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2425">
                <text>Hamish Bromley</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2426">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2427">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2428">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2429">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2430">
                <text>Lancaster University, LA1 4YW.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="107" public="1" featured="0">
    <fileContainer>
      <file fileId="91">
        <src>https://www.johnntowse.com/LUSTRE/files/original/72b66d7f2cb9a6e2e5f00da8d5935d36.PNG</src>
        <authentication>04ce111afe807bdc60d1203e751d74a1</authentication>
      </file>
      <file fileId="92">
        <src>https://www.johnntowse.com/LUSTRE/files/original/9cba03c4db3bbef2bc6e97be96d2e587.csv</src>
        <authentication>07d49477d1a4599f86e2e0e1c7069ede</authentication>
      </file>
      <file fileId="102">
        <src>https://www.johnntowse.com/LUSTRE/files/original/29f04fbd256632c62f9a4bccfcd84b06.csv</src>
        <authentication>6eff634a9c57771aadb5bdb0f6c6c42b</authentication>
      </file>
      <file fileId="103">
        <src>https://www.johnntowse.com/LUSTRE/files/original/a62db1d7439ae4c8b5dd214d8a8ffa5a.csv</src>
        <authentication>134388ec9bef40df4ea8ac7e504edbca</authentication>
      </file>
      <file fileId="106">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f91a96c95f9d541594ac391b75ae0324.pdf</src>
        <authentication>644a7a8c120a99890ed20ab50f3b581e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaires(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2366">
                <text>Comparison of Ethical Decision-Making in Emergency Service Workers and Laypeople </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2367">
                <text>James Wright</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2368">
                <text>08/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2369">
                  <text>The Trolley Problem is a theoretical ethical dilemma in which it is asked whether it is morally acceptable to actively kill one person to save five (Thomson, 1976). Emergency service workers (ESW) are often presented with ethical dilemmas, such as whether to resuscitate someone who does not want to be resuscitated (Guru et al., 1999). The present study investigated differences between laypeople (non-ESW) and ESW in the decisions made when faced with variations of the Trolley Problem. The effect of time pressure on making these decisions was also investigated, measured through response time. In total, 99 participants were tested: 47 laypeople and 52 ESW. Participants were presented with five different Trolley Problem dilemmas wherein they could passively allow five people to die or actively sacrifice one person to save the others. These dilemmas had distinct variations, such as the one person being a co-worker, or participants having to physically push and kill a large man. Half the participants were placed into a time pressure condition and were told that they had a time limit in which to respond, although no time limit actually existed. Results showed that neither occupation nor time pressure significantly affected response time or participant choice. Further analysis suggested some interaction effects between occupation, time pressure, and specific dilemma types. Implications, including suggested training practices for ESW, are discussed, along with criticisms of the methodology and recommendations for future research.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2370">
                <text>Trolley Problem, ethical dilemmas, time pressure, emergency service workers, decision-making.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2371">
                <text>Method&#13;
Sample&#13;
This project aimed to use a total of 112 participants, with 56 of these being ESW, and 56 being laypeople. This number was calculated using the G*Power software, with an alpha of .05, power of .80, and a medium expected effect size of .35, across five levels of measurement. &#13;
In total, 99 participants were gathered for the present study. 47 of the sample were laypeople, whilst the other 52 were ESW. Of these, 22 were police officers, and 30 were ambulance crewmembers. Overall, ESW had an average of 7.7 years of experience (SD = 8.29), with ambulance staff having an average of 10.14 years (SD = 9.89), and police having an average of 4.52 years (SD = 4.17). Unfortunately, no other emergency service branches such as coast guard or firefighters completed the study.&#13;
The sample comprised 47 males and 48 females, with an average age of 35.65 years (SD = 12.98). Three participants declined to disclose their gender, and one participant identified as agender. &#13;
Ethical Approval and Pre-Registration&#13;
This study gained ethical approval on 13/04/2021, from members of the Psychology department at Lancaster University.&#13;
This study was also pre-registered on the Open Science Frameworks website on 17/05/2021. This can be found at the following link: https://osf.io/4ecjg/?view_only=95615bd16f2c4a9db88dd77543780ec2&#13;
Materials&#13;
Survey&#13;
The present study was delivered through a Qualtrics survey, created entirely by the researcher. The survey contains standard psychological research documents, such as an information page, consent form, demographic information page, and debriefing. The survey also contains two sets of five vignettes describing ethical dilemmas for each condition of the experiment. &#13;
Demographics&#13;
Participants are asked to provide some demographic information: age, gender, and occupation. Participants are given options for occupation, including police, fire, or ambulance, as well as an option for ‘other’ emergency services, where a free typing box is presented. This is to cover occupations outside of the main three emergency services, such as coastguard or mountain rescue. If participants are not ESW, they have the option to say they are not a member of the emergency services. &#13;
Ethical Dilemmas&#13;
The present study tests a set of five ethical dilemma vignettes. To read each dilemma, see Appendix A. Each vignette describes a version of the Trolley Problem, where there is an out-of-control trolley (the word “tram” is used to make it clearer to British participants) speeding down the tracks towards a group of five people. For each dilemma, there is an active choice, which entails sacrificing one life to save five, or a passive choice, which entails allowing five people to die to avoid killing one person. Each dilemma presents a different single person who could be placed in danger: a non-descript person, an elderly person, a co-worker, a large man, and the “culprit”. &#13;
Non-Descript Person. This dilemma is a traditional retelling of the Trolley Problem. Participants are told that there is an out-of-control trolley speeding down the tracks, towards five people who are stranded. Participants are told that they have the choice to pull a lever and divert the trolley onto a different track, however there is one person stranded on those tracks. The decision participants are faced with here is whether to make an active choice or a passive choice. The active choice is to pull the lever, diverting the trolley and saving the five, whilst sacrificing the individual. The passive choice is to not pull the lever, allowing the trolley to hit the five people, whilst saving the individual.&#13;
It is often found that people sacrifice one person to save five in this dilemma (Thomson, 1976; Greene, 2016). Responses to this condition demonstrate how people weigh up lives on a strictly numerical basis, knowing nothing about the traits of the person. By having a condition in which participants know nothing about the person on the tracks, this can be compared to responses when it is an elderly person or a co-worker on the tracks.&#13;
Elderly Person. This dilemma is the same as the non-descript person dilemma, however participants are told that the person on the tracks is elderly.&#13;
This condition has been found to affect how people respond to the Trolley Problem, with people being more likely to sacrifice the elderly person than people of any other age (Kawai et al., 2014). This is interesting in the study of moral psychology, as it shows how people weigh up the worth of lives based on certain attributes, such as age. This can also be compared to how people respond when they know nothing about the person on the tracks. This is also important to investigate in an ESW context, as elderly people are more likely to be admitted to hospital (Burns, 2001), leading ambulance crews to encounter them more often.&#13;
Co-Worker. This dilemma is the same as the non-descript person dilemma, however participants are told that the person on the tracks is one of their co-workers.&#13;
This dilemma was chosen based on past research suggesting that participants are less likely to sacrifice people they perceive to be part of their identity in-group (Swann Jr et al., 2010). This is a relevant factor to investigate as part of a study into ESW, a group who develop strong in-group feelings, including having better self-care and social support (Shakespeare-Finch et al., 2002). This is also interesting when investigating ESW populations such as firefighters or police, who may be placed into situations where a co-worker is in danger whilst trying to save members of the public. This dilemma demonstrates how ESW weigh up the lives of their co-workers compared to strangers.&#13;
Large Man. In this dilemma, participants are told that there are five people on the tracks, and standing next to them is a large man. Participants are told that if they push the large man onto the tracks, it would stop the trolley and the five people would be saved. The decision participants are faced with here is whether to make an active choice and push the large man onto the tracks, stopping the trolley and saving the five, or to make a passive choice and allow the trolley to hit the five people.&#13;
This is a version of the “Footbridge Dilemma”, in which it is found participants are typically less willing to make the active decision and push the man (Nichols &amp; Mallon, 2006). It is an interesting take on the Trolley Problem dilemma, as it forces participants to make a more physical decision through pushing and directly causing a person’s death, as opposed to pulling a switch which then indirectly leads to someone’s death. This is also relevant in the study of ESW, who tend to work directly and physically with people as opposed to making indirect decisions. &#13;
Culprit. This dilemma is the same as the Large Man dilemma, however rather than a large man, participants are told that standing next to them is the “culprit”. The “culprit” is explained to participants as the person who stranded the other five people on the tracks. &#13;
This dilemma was chosen as it tests how people respond to the same physical pushing decision as the Large Man condition, however when the person they can push is not an innocent bystander, and instead is someone who is trying to end the lives of others. This allows for the investigation of how people weigh the lives of criminals compared to innocent people. This is also interesting in the study of ESW, especially when regarding police, since their occupation involves apprehending criminals so they can then be sentenced, not choosing the punishment based on their own moral reasoning.&#13;
Time Pressure&#13;
Participants who are assigned to the Time Pressure condition are told both during instructions and above each dilemma that they only have a limited amount of time to make their decision. They are told that after that time has passed, they may not be able to provide a response. This is not true; there are no time limits on any question. This attempts to simulate time pressure by making participants feel they have limited time to react.&#13;
Overall, 52 participants were assigned to the Time Pressure condition, and 47 were assigned to No Time Pressure. A more equal split was intended, but was not possible because incomplete responses interfered with the equal randomisation of conditions.&#13;
Response Time&#13;
The decision-making speed is automatically recorded by Qualtrics, determining how long it took participants to finalise their decision. This is taken as the time from when participants opened a vignette until they submitted their response. It was decided that the response time would be taken at the point the choice is submitted, as opposed to the last button press participants made. This is because it cannot be certain at what point participants have finished considering their response. They may still be thinking about their answer after selecting the option, but before submitting. Therefore, it cannot be assumed that the final button press marked the end of their decision-making. &#13;
Justification&#13;
After each decision, participants are asked to briefly explain why they made the decision they did, imagining they are speaking to a close friend. This ensures participants think more deeply about the decision they make, as they know they will have to defend it. This is presented to participants as a free entry text box, shown after each dilemma they respond to.&#13;
Pilot Study&#13;
The present study was first piloted on an ESW member, in this case a senior paramedic, to test the validity of the ethical dilemmas as well as to check for any other issues with the survey. The only negative feedback received was that some of the dilemmas looked visually similar on the page and could be mistaken for the dilemma before. To resolve this, a reminder for participants to read carefully, since every dilemma was different, was added, along with formatting changes such as bolding the critical sections of text to make them more obviously different.&#13;
Procedure&#13;
Participants were recruited via social media. ESW were gathered via the Our Blue Light ESW charity’s social media pages, as well as being sent around stations via the researcher’s contacts. Laypeople were also recruited through social media, with some coming from the Our Blue Light pages, as well as through friends and family of the researcher.&#13;
Participants had access to the study through a link, which took them to the introduction page of the present study. After reading this and giving consent, the study began. Participants were randomly assigned by the Qualtrics software to either the Time Pressure or No Time Pressure condition. This affected which set of instructions they saw. Participants were all shown each of the five dilemmas, presented one by one on their screen. The dilemmas were presented in a randomised order for each participant, to avoid any order effects. Following each dilemma, participants were presented with the justification question and free entry text box. After repeating this for each dilemma, participants were presented with a debrief page, and the study concluded.&#13;
Data Analysis&#13;
To examine the choices ESW made compared to laypeople, a 2x2 chi-square test will be conducted. A 2x2 chi-square test will also be conducted to examine the choices made by those in the time pressure condition against those who were not. Descriptive statistics will also be presented, including the counts of each choice made separated into groups, along with means and standard deviations of response time.&#13;
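The 2x2 chi-square comparison described above can be sketched with SciPy. Note that the counts below are invented placeholders for illustration, not the study's data (the real analysis was run in R, per the archived RStudio.R file):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts of participant choice by occupation group;
# the study's actual counts live in the archived CSV files.
#            active  passive
table = [[30, 22],   # ESW
         [28, 19]]   # laypeople

# chi2_contingency returns the test statistic, p-value, degrees of
# freedom, and the table of expected counts under independence.
chi2, p, dof, expected = chi2_contingency(table)
```

For a 2x2 table the degrees of freedom are (2-1)*(2-1) = 1, matching the two binary factors (occupation and choice).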
In order to analyse the impact of Occupation, Time Pressure, and Type of Ethical Dilemma on the decisions participants make, a generalised linear mixed-effects model will be used (Baayen et al., 2008). The statistical family used for this model will be binomial. This test was chosen as the dependent variable here, participant choice, is a categorical variable with two options (push or no push). There are also three categorical independent variables, two of which are between-subjects factors (ESW v Layperson, Time Pressure v No Time Pressure), and one within-subjects factor (Type of Ethical Dilemma). The only random effect to be used in the model is individual subjects, as each independent variable is critical to the present study, and so will be treated as fixed effects.&#13;
To compare the response time between ESW and laypeople, as well as time pressured participants and participants with no time pressure, two one-way ANOVAs will be conducted. This was chosen as the intention here is to compare performance between two independent groups. A 2x2 ANOVA on sum scores was considered, however was not possible due to participants having simultaneous membership of two groups (e.g. ESW + Time Pressure, ESW + No time pressure).&#13;
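The one-way ANOVAs comparing response time between two independent groups, as described above, could look like the following sketch (the timing values are placeholders, not the study's data):

```python
from scipy.stats import f_oneway

# Hypothetical summed response times in seconds for the two
# independent occupation groups; placeholder values only.
esw_times = [48.2, 52.1, 39.7, 61.0, 44.5]
lay_times = [50.3, 58.9, 47.2, 55.6, 60.1]

# f_oneway runs a one-way ANOVA across the groups passed in;
# with two groups it is equivalent to an independent-samples t-test.
f_stat, p = f_oneway(esw_times, lay_times)
```

The same call with the Time Pressure and No Time Pressure groups would give the second planned ANOVA.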
To further analyse participant response times to the ethical dilemmas, a 2x2x5 Mixed ANOVA will be conducted. This was chosen as the method of analysis as one aim of the present study is to compare variance between ESW and laypeople, as well as participants being under time pressure or not. There is also the factor of ethical dilemma, which has five levels due to there being five different dilemmas.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2372">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2373">
                <text>Main Data_35645845/Excel.csv, 35645845 Occupation Response Time Sum Scores/Excel.csv, 35645845 Time Pressure Response Time Sum Scores/Excel.csv, 35645845_RStudio Code/RStudio.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2374">
                <text>Wright2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2375">
                <text>Paige Givin &amp; Chloe Crawshaw</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2376">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2377">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2378">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2379">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2380">
                <text>LA1 4YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2381">
                <text>Prof. Nicola Power</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2382">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2383">
                <text>Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2384">
                <text>99</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2385">
                <text>ANOVA, Chi-Squared, Linear Mixed Effects Modelling</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="106" public="1" featured="0">
    <fileContainer>
      <file fileId="134">
        <src>https://www.johnntowse.com/LUSTRE/files/original/722ae4ceef6a14d9bbfc8bca41b825cf.pdf</src>
        <authentication>657e3892388b2f3c175c84267315a3bb</authentication>
      </file>
    </fileContainer>
    <collection collectionId="11">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="987">
                  <text>Secondary analysis</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2352">
                <text>Film language affecting behaviour: A psycholinguistic approach</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2353">
                <text>Aleksandra Tuneski</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2354">
                <text>2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2355">
                <text>Films are a popular form of art and entertainment that enable people to enjoy a story through the perception of multiple stimuli and the stimulation of emotions. Many film elements impact the audience’s attitude towards a film, yet language style has rarely been taken into consideration for research. This study focused on examining whether a relationship exists between the audience’s favouritism for films and the linguistic style present in them, predominantly concentrating on emotional factors of language in films. A dataset containing the widest public ratings of films was obtained from the Internet Movie Database platform and paired with respective transcribed film dialogues provided by OpenSubtitles.org. The corpus’s transcripts (n=88,573) were analysed using the Linguistic Inquiry and Word Count software, and all the variables produced were then correlated with IMDb’s weighted film ratings. The project found that all types of emotions present in transcripts of film language were significantly and negatively associated with the IMDb rating outcomes, although the effect sizes were small. This finding suggests there might be an inclination for emotions to be felt through other channels of stimulus perception, rather than verbal language, when it comes to films. Additional exploratory analyses showed how other variables correlated with film rating scores, and practical applications of the study findings within the advertising industry were identified.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2356">
                <text>Pearson’s correlation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2357">
                <text>Dataset&#13;
&#13;
The dataset used for the study is purely secondary and consists of transcribed film dialogues (N=88,573) complemented with each film’s respective Internet Movie Database (IMDb) rating, which at the time of collection had a minimum of 100 user ratings per film. IMDb is an online film rating platform where the wider audience must register for an account and is then able to rate and review the films they have watched. Registered IMDb members rate films on a 10-point scale, with 1 indicating “terrible” and 10 indicating “excellent” (Boyd et al., 2020). IMDb’s rating algorithms produce ratings that are weighted by metrics associated with users, rather than average ratings. Although the algorithms are unavailable to the public, IMDb’s rating system has shown consistency across films, because the weighted ratings provide reliability by reducing the possibility that a small group of users could take advantage of the rating system (IMDb, 2021). IMDb is one of the most popular and authoritative film rating websites, where the total ratings of a film are anonymous and voluntarily provided (Sawers, 2015). &#13;
&#13;
The transcribed film dialogue data was provided by OpenSubtitles.org, and the corpus was previously organised and used in a study by Boyd et al. (2020); it was generously provided by the authors for the purpose of this project. OpenSubtitles.org is an online website that provides transcribed and translated captions of motion pictures, audio files and various other audio-visual files (OpenSubtitles.org, 2021). The corpus used by Boyd et al. (2020) contains purely English-language film subtitles, corresponding to films originally released in English, or foreign films whose dialogues have been translated to English. Boyd et al. (2020) combined the transcribed film dialogues provided by OpenSubtitles.org with the IMDb ratings, along with other IMDb categories such as film genre, year of release, country of production, et cetera. Almost 90% of the IMDb categories linked to the films’ ratings are irrelevant for the purpose of this project, thus solely the film ratings will be taken into consideration for analysis.  &#13;
&#13;
Automated Textual Analysis Software (LIWC)&#13;
&#13;
To conduct the automated textual analysis, this research project will use the Linguistic Inquiry and Word Count (LIWC) tool, whose acronym is pronounced “Luke”. LIWC is a textual analysis program that measures the degree to which various dimensions of words are used in a text (Tausczik &amp; Pennebaker, 2010). The LIWC program has two central features – the processing component and the dictionaries. The processing feature takes a text file and analyses it word by word, comparing each word with the dictionary files and categorising it as, for example, a verb or a second-person pronoun (Boyd, 2017). Once the program finishes running, it produces an output where all the LIWC categories used in the text are listed, as well as the rates and percentages at which each category was used in the given text. &#13;
&#13;
The dictionaries are at the heart of the LIWC program, and they identify the group of words that belong to each category (Pennebaker et al., 2015). When the program was being created, the authors aimed at developing measures to define emotions present in words, cognitive processes, signs of self-reflection, et cetera, and in order to assign a psychological component to words, human judges contributed to developing the categories LIWC possesses today (Boyd, 2017). Across approximately 80 dimensions (see Appendix A), LIWC analyses the text in relation to various parts of speech, thinking styles, social concerns and emotions (Pennebaker et al., 2001). For example, the “positive emotion” category contains words such as “love”, “happy” and “nice”, while the “cognitive processes” category comprises words like “examine”, “think” and “understand”. &#13;
&#13;
Over the years, LIWC has been able to uncover psychological patterns and personalities purely from textual analysis; Petrie et al. (2008) used LIWC to investigate the Beatles’ lyrics and found that it was possible to distinguish each songwriter’s unique language style, and also to discover which Beatle’s style was predominant in collaboratively written songs. Research has shown LIWC to be one of the most reliable automated textual analysis tools, able to uncover and predict psychological implications residing in written sources; thus this study will employ this tool to test its hypothesis. &#13;
&#13;
Data Preparation and Analysis&#13;
&#13;
The initial corpus was subjected to cleaning procedures, where data which did not meet all inclusion criteria was removed from the dataset. The inclusion criteria consisted of film ratings having at least 100 user votes, transcribed dialogues having at least 100 words, and corpus variables containing all data values. The cleaned dataset (N=85,130) will then be processed by the LIWC program, where each word within the transcripts will be counted and sorted among the LIWC dictionary categories it belongs to. For the main hypothesis, the program will analyse the dataset for LIWC variables that have been shown to be correlated with positive and negative evaluations in the past. This way, the quantified rates of positive and negative emotion words in each dialogue will be identified. Once the rates have been extracted, a bivariate Pearson’s correlation will be conducted to assess whether a significant relationship exists between positive and negative emotion words in film dialogues and their IMDb ratings. Additionally, exploratory analyses will be run to search for significant relationships between the dataset variables and the film ratings, again by conducting Pearson’s correlation tests between the ratings and all LIWC variables produced.&#13;
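The bivariate Pearson's correlation step described above can be sketched with SciPy. The rates and ratings below are invented placeholders, not values from the study's corpus:

```python
from scipy.stats import pearsonr

# Hypothetical LIWC positive-emotion rates (percent of words per
# transcript) paired with invented IMDb ratings; placeholder data only.
posemo_rates = [2.1, 3.4, 1.8, 4.0, 2.7, 3.1]
imdb_ratings = [7.2, 6.1, 7.8, 5.5, 6.9, 6.4]

# pearsonr returns the correlation coefficient r and a two-tailed p-value.
r, p = pearsonr(posemo_rates, imdb_ratings)
```

Running the same call once per LIWC output variable against the ratings column would reproduce the exploratory analyses the paragraph describes.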
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2358">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2359">
                <text>Tuneski (2021)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2360">
<text>Amy Austin and Lesley Wu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2361">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2363">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2364">
                <text>Secondary Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2365">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2956">
                <text>Ryan Boyd</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2957">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2958">
                <text>Language psychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2959">
                <text>88,573</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2960">
                <text>Pearson Correlation </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="105" public="1" featured="0">
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="17">
      <name>Software</name>
      <description>A computer program in source or compiled form. Examples include a C source file, MS-Windows .exe executable, or Perl script.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2345">
                <text>The effects of screen exposure on developmental skills among children at two and three years of age.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2346">
                <text>Afrah Alazemi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2347">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2348">
<text>Previous research into the topic of children’s development has tended to take place in Western nations (Kuta, 2017; Martinot, 2021). One aspect of development is language development, and one aspect of research on that matter is the use of electronic devices, with the potential for consequent effects on children’s language abilities. This paper reviews and builds upon the scope of the available research, with its disparate findings, by offering research from the context of Kuwait, a non-Western nation where parents tend to favour their children having access to new technologies regardless of their age (Dashti &amp; Yateem, 2018). The increasing number of children being exposed to electronic devices of various descriptions raises concerns about the possible adverse effects of screen exposure on their development, particularly through the displacement of educationally enriching activities, which motivates the present study (Haughton, Aiken &amp; Cheevers, 2015). Based on a review of the existing literature, the present research starts from the hypothesis that language development will be negatively correlated with media exposure. Valid data relating to 96 children of 24 to 36 months of age were collected using two questionnaires, one relating to the child’s knowledge of Arabic words on various topics (voices of animals, names of animals, vehicles, toys, food and drink, etc.) and the other quantifying the child’s daily screen time. Ordinary least squares analysis was performed using SPSS, version 26. While a statistically significant moderate positive correlation between language expression score and age was found, such that an increase in age was associated with an increase in language expression (the number of words understood and expressed), no significant effect of screen time on language expression was found after adjusting for age. 
This indicates, therefore, the value of employing non-Western populations in research into cognitive development, and suggests the need for further research in order to attain generalisable findings.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2349">
                <text>Developmental Psychology </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2350">
<text>The parents of a total of 100 participant children took part in a questionnaire survey. The reports of 4 parents were excluded because their child’s age exceeded 36 months and the inclusion criteria for the study were set at 24 to 36 months. Participants were selected by means of opportunity sampling. An announcement was sent via WhatsApp to those of my contacts who had children of an age appropriate for inclusion in the study, and parents were recruited by sending a link to the survey through WhatsApp. Family and friends were then asked to pass the WhatsApp contact on to those they knew who had children within the set age range. &#13;
Parents read information about the study and their informed consent to participate in the questionnaire survey was obtained via Qualtrics. The Lancaster University Psychology Department gave ethical approval for the present study. &#13;
&#13;
Procedure&#13;
The data for the present work were gathered by means of an online questionnaire via Qualtrics between 7 June 2021 and 22 June 2021. During this time, participants submitted answers to two questionnaires: a) the Arabic CDI, which presents Arabic words arranged in groups (for example voices of animals, names of animals, vehicles, toys, food and drink, etc.) to measure the child’s knowledge of the Arabic language (Abdel Wahab, 2020), and b) a questionnaire on the number of hours the child spent in front of a screen, the parents’ opinion of the amount of screen time appropriate for children, their control over their children’s viewing, and whether or not the children are allowed to watch while sleeping and eating. The survey instruments were designed to measure the extent to which screen viewing is related to the language development of Kuwaiti children aged between two and three years.&#13;
Materials&#13;
CDI: The Arabic CDI language scale developed by Abdel Wahab (2020) is a questionnaire comprising a set of categories containing checklists for identifying variety and number of words. In front of each word there are three options (‘knows it’, ‘knows it and says it’, ‘does not know it’) and parents are asked to respond to each item according to their children’s knowledge of these words. The Arabic CDI questionnaire contains 100 words divided into the following categories: voices of animals, names of animals, transport, toys, food and drink, clothes, parts of body, home furniture, little things inside the house, things and places outside the home, people, games and daily routine, actions, time-related words, adjectives, pronouns, question words, prepositions, and number formulas.&#13;
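A minimal sketch of how such a checklist might be scored. The scoring rule and the `expression_score` helper below are hypothetical; the questionnaire itself defines only the three response options per word.

```python
# Hypothetical scoring of an Arabic CDI checklist: here the expression
# score counts words the parent marked "knows it and says it". The paper
# does not state its exact scoring rule.
RESPONSES = {"knows it": 0, "knows it and says it": 1, "does not know it": 0}

def expression_score(checklist):
    """checklist maps each CDI word to one of the three response options."""
    return sum(RESPONSES[answer] for answer in checklist.values())

score = expression_score({"cat": "knows it and says it",
                          "car": "knows it",
                          "ball": "does not know it"})
```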
Media exposure questionnaire: Following the language questionnaire, parents completed a second survey measuring their children’s screen viewing, stating how many hours per day they spent watching a screen. Parents were asked to report frequency of screen use by choosing among the following six options: None, 0 to 1 hour, 1 to 2 hours, 3 to 4 hours, 5 to 6 hours, and &gt; 6 hours. Participating parents were then asked to state what length of time they would consider it appropriate for their children to watch a screen, with the same set of responses available to them. There was then an item asking the parents whether they were making any efforts to reduce their children’s screen time, such as setting specific days or times for viewing or preventing them from viewing their screens while eating or in the bedroom, for example.&#13;
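As a sketch, the six response bands might be coded numerically and entered into an age-adjusted ordinary least squares model along these lines. The data are illustrative; the study used SPSS version 26, and its numeric coding of the bands is not reported, so the band midpoints here are assumptions.

```python
# Hypothetical band coding and age-adjusted OLS fit (illustrative data).
import numpy as np

BAND_HOURS = {"None": 0.0, "0 to 1 hour": 0.5, "1 to 2 hours": 1.5,
              "3 to 4 hours": 3.5, "5 to 6 hours": 5.5, "> 6 hours": 7.0}

age_months = np.array([24, 27, 30, 33, 36, 25, 29, 34])
screen = np.array([BAND_HOURS[b] for b in
                   ["None", "1 to 2 hours", "3 to 4 hours", "0 to 1 hour",
                    "5 to 6 hours", "1 to 2 hours", "None", "> 6 hours"]])
language = np.array([40, 55, 62, 70, 85, 42, 60, 72])  # assumed CDI scores

# Design matrix: intercept, age, screen time. The screen coefficient is
# then the effect of screen time after adjusting for age.
X = np.column_stack([np.ones(len(age_months)), age_months, screen])
coef, *_ = np.linalg.lstsq(X, language, rcond=None)
intercept, b_age, b_screen = coef
```

Including age in the design matrix is what makes the screen-time coefficient an adjusted effect, matching the paper's finding of no significant screen-time effect once age is controlled for.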
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2785">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2786">
                <text>Kristy Dunn</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2787">
                <text>100 </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2788">
<text>Correlation and regression</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="103" public="1" featured="0">
    <fileContainer>
      <file fileId="89">
        <src>https://www.johnntowse.com/LUSTRE/files/original/34148e3407b9c0eff7bbfd24ea45f258.pdf</src>
        <authentication>8bf0a71e67d6b8bccdd3c9eab3018e30</authentication>
      </file>
      <file fileId="90">
        <src>https://www.johnntowse.com/LUSTRE/files/original/57f56b407c25ca30d5bbb61a71f67ef5.pdf</src>
        <authentication>8cc46cdf06fad05b67a93d11cd3d9bab</authentication>
      </file>
      <file fileId="104">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d28a0639041b80f3a7bcad46fb7ab338.csv</src>
        <authentication>731681121cc89fc2f5bc38995013977e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2330">
                <text>Do foetuses have the ability to retrieve and retain information presented by both the mother-to-be and partner?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2331">
                <text>Hope Butler </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2332">
                <text>08/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2333">
<text>The complex phenomenon of language development is vital for communication and for strengthening the attachment bond between caregiver and baby (Chew &amp; Ng, 2021). The period in which humans begin to process speech is difficult to define; previous research has shown that foetuses can retain linguistic information presented to them over six weeks by their mother-to-be, showing a preference for this information postnatally (DeCasper and Spence, 1986). However, the language environment of the foetus likely also incorporates the secondary carer, and very little research has investigated the role of the partner in influencing language retention. This study aims to investigate the extent to which foetuses can retrieve and retain linguistic input presented to them by both their mother-to-be and their partner. A within-subjects study was conducted with two participant pairs recruited via opportunistic sampling through Lancaster University’s Babylab. Participants were asked to record themselves reading “The Cat in the Hat” and to play both recordings to the foetus every day for two weeks. During these sessions, the mother-to-be counted the frequency of kicks and rated the movement intensity per session. The findings indicated that foetuses can retrieve and retain language presented over a two-week period at only 32 weeks’ gestation. Foetal kicking decreased significantly as exposure to the recordings increased. This provides evidence of online processing of linguistic input at 32 weeks’ gestation, implying that the full six-week exposure indicated by previous research is not necessary and suggesting an innate capacity for language processing, although there is scope for environmental influence on this. No significant effect of which parent made the recording on the foetus’s ability to process language was found. 
This suggests that humans have an innate ability to process linguistic information regardless of the level of exposure to a particular voice. However, this conclusion rests on a null result in an underpowered study; further research would benefit from a larger sample size to increase statistical power and better represent the general population. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2334">
                <text>language, foetus, mother-to-be, partner, retention </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2335">
                <text>Methods: &#13;
Ethics Statement: &#13;
Ethical approval was granted by the Lancaster University Psychology Department on the 12th of April 2021, before any data collection took place. Participants were provided with information for review and asked to complete an informed consent form online before participating in the experiment, and were given the option to withdraw at any point in the study. &#13;
Participants: &#13;
Three mothers-to-be and their partners were recruited via opportunistic sampling through Lancaster University BabyLab social media (http://wp.lancs.ac.uk/babylab/) and word of mouth, in exchange for a £5 book voucher from Waterstones. To take part in this research, mothers-to-be had to live with a partner and be at a foetal gestational age of between 32 and 34 weeks. If any participants were bilingual, they were asked to record the story in English to ensure reliability. &#13;
Materials and Measures: &#13;
Due to restrictions arising from COVID-19, this experiment took place online. Participants were sent an email containing a link to a Qualtrics survey. Qualtrics is survey software that allows participants to access surveys and questionnaires on any digital device at any time, aiding distribution. &#13;
Qualtrics Survey: &#13;
To complete this survey, participants were required to have access to a mobile device or computer. The survey contained the information sheet, consent form, instructions, and demographic questions. The questionnaire asked the mother-to-be to rate the intensity of movements and state the frequency of kicking per session (see Appendix A). The intensity of kicks was recorded using a scale bar on which mothers-to-be could rate the intensity of the kicking per session (0-100). &#13;
Recording “The Cat in the Hat”: &#13;
Participants were given a copy of an extract from “The Cat in the Hat”, and both the mother-to-be and their partner were asked to record themselves reading the story aloud using a device on which they could then play the recording every day for two weeks. This was estimated to take between five and ten minutes per participant, depending on reading speed. &#13;
To control the sound level of their recordings, parents were advised to download the “Decibel X” app, which can monitor sound level, in order to keep it at the recommended 90 dB (Luu, 2011). To help them track foetal kicks, mothers-to-be were also advised to download the NHS “kicks count” app, which helped them accurately count the frequency of kicks per story session. &#13;
At the end of the survey and after completion of the study, participants were given a debrief sheet which contained the aims of the research study and any contact information they might need for further questions. &#13;
Design: &#13;
This research study used a within-subjects design, as all participants took part in all sections of the experiment. The first independent variable was the parent reading the story, with two levels: the mother-to-be or their partner. The second independent variable was the time point, from day one until the end of the two weeks. The dependent variables were the frequency and intensity of foetal kicking during exposure to the mother-to-be’s and the partner’s recordings of “The Cat in the Hat”, as measured by the mother-to-be. &#13;
Procedure: &#13;
Once they had consented, the mother-to-be and partner were given an extract from the story “The Cat in the Hat” via the Qualtrics survey and were asked to record themselves individually reading the story on a device on which they could play it back on several occasions. Once the story had been recorded by both the mother-to-be and their partner, they were asked to play the recordings to the foetus every day for two weeks. The order of presentation was counterbalanced. &#13;
While the recordings were being played, the mother-to-be was required to monitor the intensity and frequency of the kicks that occurred for the duration of the auditory exposure. She was asked to do this for the duration of exposure to the recordings every day, and then to upload the outcomes for each session using the original Qualtrics survey link. &#13;
Analysis: &#13;
RStudio is professional software for statistical programming and for producing the graphs and tables used to analyse the data collected. A linear mixed-effects model was conducted for the analysis. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2336">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2337">
                <text>data/r.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2338">
                <text>Butler2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2339">
                <text>Rebecca James and Livvi Taylor</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2340">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2341">
                <text>No Relation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2342">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2343">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2344">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="102" public="1" featured="0">
    <fileContainer>
      <file fileId="69">
        <src>https://www.johnntowse.com/LUSTRE/files/original/95005cf8d8749a05d25303ac63248ba7.pdf</src>
        <authentication>30840414bccfa352a460d451969fdc9f</authentication>
      </file>
      <file fileId="71">
        <src>https://www.johnntowse.com/LUSTRE/files/original/174819714ee258dfb13c0fa7a6ace304.csv</src>
        <authentication>b6253d1266ff4742351c2d4c4f8a73c4</authentication>
      </file>
      <file fileId="72">
        <src>https://www.johnntowse.com/LUSTRE/files/original/8e409cbed00f3a76a3d22f879e2a2f34.csv</src>
        <authentication>daeb9c288d735fb09af0501cee1095a4</authentication>
      </file>
      <file fileId="73">
        <src>https://www.johnntowse.com/LUSTRE/files/original/f23857f7385dc20a585dbf7e73125224.csv</src>
        <authentication>44e19fb4059e8badfced3d17ca965b8c</authentication>
      </file>
      <file fileId="74">
        <src>https://www.johnntowse.com/LUSTRE/files/original/16b37ffbcc5b229554cf3d83269cb255.csv</src>
        <authentication>e017b75af5cef3fd4f6aff3c9addce1c</authentication>
      </file>
      <file fileId="75">
        <src>https://www.johnntowse.com/LUSTRE/files/original/e801d8f94428315c35ca6cb346277f2a.csv</src>
        <authentication>3798e9838e1c4b39a4550698cacb927d</authentication>
      </file>
      <file fileId="76">
        <src>https://www.johnntowse.com/LUSTRE/files/original/213b2ed462d59dbc519e38d61bd28ce0.csv</src>
        <authentication>99511941f8d43496073e0ddb9c73955c</authentication>
      </file>
      <file fileId="79">
        <src>https://www.johnntowse.com/LUSTRE/files/original/1da7d953dc7cb556a9ef8b6fa9d144a0.doc</src>
        <authentication>df30a3da04823c1894e534bd62de7b14</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2316">
                <text>Running Memory Span Development: The Input Mechanism and Hebb effect</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2317">
                <text>Yu Xie</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2318">
                <text>2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2319">
                <text>It is unclear whether an active or a passive strategy is used in the running memory task, and whether the task elicits the Hebb effect. The aim of this study was to explore the input mechanism and the Hebb effect in the running memory task via a developmental study. Children were asked to perform four working memory tasks: a counting span task, a free recall task, a Hebb digit task, and a running memory task. In order to explore the Hebb effect in the running memory task, the last three digits of every third list were repeated. The results suggested that running memory was a recency-based phenomenon and that the Hebb effect was elicited in children.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2320">
                <text>Participants &#13;
Fifty-seven Chinese primary school students (23 female, 34 male), aged between 7 and 13 years (Mean = 9 years 6 months; SD = 1.754), took part in the present study. The children were recruited from Grades one to six at Tianyi School in Xuancheng City. Chinese was the first language of all children. All the children completed a 45-minute testing session, which involved four memory tasks. At the end of the test, children received a notebook as a small gift of appreciation for taking part in the present study. &#13;
Materials &#13;
The experiment was presented using SuperLab 4.0 on a Sony laptop with a 14-inch colour screen. Participants' responses were recorded by the tester on answer sheets. Every child completed a counting span task, a free recall task, a Hebb digit task, and a running memory task.&#13;
Counting span task. The counting span arrays were developed from Towse and Hitch (1995) and consisted of an equal number of target triangles and non-target squares. The target triangles were red and approximately 30 mm in length; the non-target squares were blue and approximately 28 mm in length. The number of both targets and non-targets varied from 3 to 9 (mean = 6). The counting span arrays were presented at the centre of the computer screen on a white background. The triangles and squares were randomly positioned in every display.  &#13;
Free recall task. For this task, 144 high-frequency Chinese two-syllable nouns (see Appendix A) were recorded in a male voice at a rate of 1 word per second, using Adobe Audition 3.0. Two practice lists and ten test lists were presented; every list included 12 words, played by the computer at a rate of 1 word per second.&#13;
Hebb digit task. All digit lists contained the digits 1 to 9 in random order, avoiding any repetition of digits (see Appendix B). The digits were recorded using Adobe Audition 3.0 at a rate of 1 digit per second. There were 2 practice lists and 24 test lists, and each list contained nine digits. Among the test lists, 16 were different from one another, and the other 8 were identical – termed the Hebb list – and presented on every third trial beginning on Trial 3. The 24 test lists were divided into 8 blocks, each comprising 2 different lists and a Hebb list. &#13;
Running memory task. The lists included 12, 14, 16, 18, or 20 random digits from 1 to 9 (see Appendix C), presented as recorded audio. Two presentation rates were used in this task: 0.5 s per digit as the fast rate and 2.5 s per digit as the slow rate. In both conditions, there were 2 practice lists and 24 test lists. In order to test the Hebb effect in the running memory task, the 24 test trials comprised 16 completely different lists and 8 lists whose last 3 digits were the same, presented on every third trial. &#13;
Procedure &#13;
The experiment lasted 45 min, and every child completed 4 tasks. Each participant was seated in front of the computer screen at a distance of 65 cm. All tasks included two practice trials to familiarise children with the procedure. Once children had completed the practice trials and understood the procedure, they proceeded to the test trials. While children were performing the tasks, the experimenter gave no feedback about the accuracy of the recalled words or digits. Task order was counterbalanced using a Latin square design, as shown in Table 1. Because the running memory task had two conditions, the fast rate and the slow rate, the order of these conditions was also counterbalanced. In all, therefore, there were eight orders in the present study, and the children were divided equally into eight groups on that basis. After completing each task, participants were given sufficient time to rest. &#13;
Counting span task. The children were introduced to the counting and recall requirements. Before every trial, a fixation symbol was displayed at the centre of the screen for 0.5 s. When the target triangles and non-target squares were presented, participants were required to count the red triangles aloud and repeat the final number. Once the child had repeated the last number, the experimenter pressed a key to show the next display, and counting speeds were recorded by the computer automatically. There were three trials at every level, and each trial at level n included n + 1 displays. For example, participants counted 2 displays at level 1 and 3 displays at level 2. The final level was level 4, which contained 5 displays. After the 2 to 5 displays, children were asked to report all the final counts of red target triangles from the previous displays. If a child failed to recall correctly on at least two of the three trials, the counting span task ended at that level; otherwise, they progressed to the next level. &#13;
Free recall task. Children were required to listen to the words and, after the 12th word, to recall as many as possible in any order. The experimenter wrote down participants' responses on answer sheets. If a child could not report a new word within 30 s, the experimenter proceeded to the next trial. &#13;
Hebb digit task. The procedure for the Hebb digit task followed Hebb (1961). Children were asked to listen to every list and report all digits in the correct order. Children reported the digits orally, and the experimenter recorded the responses on an answer sheet. Because the running memory task also involved Hebb lists, 48 children were asked whether they had noticed any regular pattern in the digit tasks after they had completed both the Hebb digit task and the running memory task. Only 5 participants noticed the repetition in the running memory and Hebb digit tasks.&#13;
Running memory task. Children listened to lists of digits, different from those in the Hebb digit task, and were required to repeat only the last three digits rather than all digits in the list. To counterbalance order effects across the two conditions, half of the children were administered the fast rate condition first and the other half the slow rate condition first.&#13;
Scoring&#13;
Counting span task. Counting errors and counting speed were recorded, and scoring used the partial-credit unit scoring method prescribed by Conway et al. (2005). First, the correct items in each sequence were counted. If all items in a sequence were correct, the sequence was given one point; otherwise, the sequence was scored as the proportion of correct items. Finally, a participant's counting span was calculated as the sum of the scores for all sequences. &#13;
Free recall task. The scoring method was the one prescribed by Tulving and Colotla (1970), which involves calculating the intratrial retention interval (ITRI): the number of items intervening between an item's presentation and its recall. For instance, if the sequence was A, B, C, D, E, F, and G, and a participant reported G, F, and A, the ITRIs for those items were 0, 2, and 8, respectively. Before calculating the ITRI, the digit span on the non-repeating Hebb lists was calculated for every child. If a child's digit span was 5, an item was classified as recalled from primary memory when its ITRI was 5 or less, and from secondary memory when its ITRI was 6 or more. &#13;
Hebb digit task. Every digit recalled at the correct position was scored one point. The score for the non-repeating lists was the mean score across non-repeating lists, and the score for the repeating lists was the mean score across repeating lists. &#13;
Running memory task. The running memory span score was the mean number of digits recalled in the right positions. If all 3 digits were recalled in the correct sequence, the score was 3; if 2 digits (for example the first and second, the second and third, or the first and third) were in the correct serial order, the score was 2; if a single digit was in the correct position, the score was 1. As with the Hebb digit task, scores for non-repeating and repeating lists were calculated separately.  &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2321">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2322">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2323">
                <text>Xie2013</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2324">
                <text>Rebecca James</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2325">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2326">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2327">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2328">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2329">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
