<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-xml&amp;page=6" accessDate="2026-05-03T02:50:21+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>6</pageNumber>
      <perPage>10</perPage>
      <totalResults>148</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="146" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3021">
                <text>The validity of traditional readability tests on accurately predicting people’s comprehension of health information</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3022">
                <text>Jiawen Liu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3023">
                <text>2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3024">
                <text>A substantial body of evidence indicates that readers benefit from clear and understandable health information in a variety of contexts. Authors have long drawn on a wide range of readability formulas in the hope of producing comprehensible texts for readers. Both the traditional readability formulas and the newer Coh-Metrix algorithms have been widely used for decades, and the newer tool is better supported by theoretical evidence; nevertheless, empirical evidence for the utility of either kind of measure remains scarce. In this paper, a secondary data analysis was conducted to provide empirical evidence on whether the widely used readability tests can effectively predict participants’ comprehension responses. Using Bayesian generalized linear mixed-effects models, variation in the traditional readability formulas and in two of the newer Coh-Metrix measures was found to have little or no effect on variation in participants’ comprehension accuracy. It is therefore suggested that researchers think twice before using readability tests to analyse text difficulty.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3025">
                <text>Participants&#13;
Participants in the original study were recruited through the Prolific online platform. All were UK nationals who were aged eighteen or over and spoke English as their first language. Participants who completed the test battery were awarded £12.50 (equalling £6.25 per hour). All participants who volunteered were tested, excluding those whose recorded reading times for the health-related information texts were below 30s. The reading time covers reading the text and answering the questions relating to that text, including the self-rated evaluation-of-understanding probe. While participant recruitment was administered through the Prolific platform, response data were collected through a Qualtrics survey for each study. &#13;
Design &#13;
Two studies were conducted in the original research. In Study One, participants were presented with a sample of written health information texts on a range of topics. Study Two replicated and extended this observation by presenting a sample of texts on a range of health topics, together with a sample of guidance texts on COVID-19. In both studies, participants completed four multiple-choice questions, each with three answer options, in response to each stimulus health text. After the comprehension test questions, participants rated how well they thought they understood the information in the guidance. The original dataset also included individual-difference measures, such as reading skill and knowledge, and information on text attributes. Participants’ responses to the four multiple-choice comprehension questions for each text and the individual-difference measures were used in the current analysis, with additional text attributes included. In sum, apart from the health-text materials used to test comprehension and the participants sampled, all variables and procedures were identical in both studies. Since the only difference between the two studies’ datasets was the inclusion of COVID-19 texts in Study Two, and all variables in both datasets were identical, the datasets were renamed Dataset One and Dataset Two to distinguish them more easily. &#13;
Material &#13;
For Dataset One (Study One in the original data), 25 health-related information texts were collected from those available on NHS trust organization webpages. The texts were chosen from 115 candidate texts drawn from the web resources of a quasi-random sample of 23 NHS England trusts (10% of the 228 trusts in England). For Dataset Two (Study Two in the original data), 14 texts concerning a range of health matters and 15 texts concerning COVID-19, or guidance relating to the public health response to the pandemic, were collected. As in Dataset One, the general health texts were selected as a sub-set of a (fresh) pool of 115 candidate texts extracted from the web resources of a (new) sample of 23 NHS England trusts. The COVID texts were selected from a pool of 115 candidate texts extracted from gov.uk, charity (British Heart Foundation, Cancer Research UK), NHS UK, and NHS England trust webpages. The selection of texts, for both general health and COVID-19 information, was made so that the sub-set of items varied as widely as possible across the distribution of values (for each pool of candidates) on each critical text feature. For each text chosen, a set of four multiple-choice questions (MCQs) was constructed, each with three answer options, to test participants’ comprehension. &#13;
Individual differences measured (vocabulary knowledge, health literacy, reading comprehension skill, and reading strategy): &#13;
Vocabulary knowledge. The Shipley vocabulary sub-test was used to estimate vocabulary knowledge (Kaya et al., 2012). In the Shipley test, participants were required to choose, from four alternatives, the word synonymous with a target stimulus word (the other three alternatives being semantically related or unrelated distractor words). Each participant’s score was the total number of correct answers out of 40 multiple-choice items. &#13;
Health literacy. The Health Literacy Vocabulary Assessment (HLVA) was used to estimate health literacy. Participants were again required to choose the word synonymous with a target stimulus word from four alternatives, with all items set in health contexts. Since the vocabulary presented was drawn from the health-care profession, the HLVA is designed to test participants’ background knowledge of health matters and is considered an index of health literacy. Each participant’s score was the total number of correct answers out of 16 multiple-choice items. &#13;
Reading skill. The Qualitative Reading Inventory (Leslie &amp; Caldwell, 2017) was used to assess reading skill. Participants read a short factual text (comprising 802 words) about the life cycle of stars and then answered two sets of 10 open-class questions related to the text. The questions covered not only information stated explicitly in the text but also information that required inference from background knowledge. Each participant’s QRI score was the total number of correct answers out of 20 open-class questions. &#13;
Reading strategy. A reader-based standards-of-coherence measure from Calloway’s (2019) doctoral dissertation was used to assess reading strategy. Participants completed a 5-point Likert scale, based on their reading experience, ranging from very untrue to very true. The scale includes 87 items and has been reported to measure readers’ reading goals and learning strategies effectively. Each participant’s scale score corresponded to their responses on the 87-item scale. &#13;
Text feature measures (traditional readability test scores and Coh-Metrix scores of the health-related information texts presented to participants): &#13;
Referential cohesion. The Coh-Metrix tool was used to calculate the referential cohesion (co-reference) of texts. Referential cohesion captures the degree of overlap in concepts, words, and pronouns between sentences and paragraphs. As the similarity of sentences and conceptual ideas within a text increases, it becomes easier for readers to make connections between ideas and sentences (Coh-Metrix, 2012). Nevertheless, texts low in referential cohesion are sometimes preferable when readers are required to be more actively involved in comprehending a text (Coh-Metrix, 2012). &#13;
Deep cohesion. The Coh-Metrix tool was used to calculate the deep cohesion of texts. Deep cohesion refers to how well a text is tied together by a sufficient number of cohesion ties, also called connectives (Coh-Metrix, 2012). Deep cohesion is determined by the number of connectives in the text, including temporal, causal, additive, logical, and adversative connectives, which connect ideas and propositions and clarify relations within a text (Kintsch, 1998). Using connectives effectively helps to tie the information together and thus facilitates readers’ understanding. &#13;
Flesch Reading Ease Score (FRE). The FRE (Badarudeen &amp; Sabharwal, 2010) is one of the traditional readability tests. The formula for the FRE is 206.835 - (1.015 * ASL) - (84.6 * ASW), where ASL represents the average sentence length and ASW represents the average number of syllables per word. The FRE evaluates texts on a 100-point scale, with higher scores indicating that the text is easier to comprehend. &#13;
The Gunning Frequency of Gobbledygook (FOG). The FOG (Roberts et al., 1994) is one of the traditional readability tests. The formula for the FOG is 0.4 * (ASL + % polysyllabic words), where ASL represents the average sentence length. Passages tested with the FOG must contain more than 100 words, and the resulting score corresponds to the education level a reader needs to comprehend the text. &#13;
The Flesch–Kincaid Grade Level (FKG). The FKG (Woodmansey, 2010) is one of the traditional readability tests. The formula for the FKG is (0.39 * ASL) + (11.8 * ASW) - 15.59, where ASL represents the average sentence length and ASW represents the average number of syllables per word. The FKG yields a number indicating the school grade a reader needs to have reached to comprehend the text, ranging from grades 3 to 12. &#13;
Simple Measure of Gobbledygook (SMOG). The SMOG (McLaughlin, 1969) is one of the traditional readability tests. The formula is 1.043 * sqrt(number of polysyllabic words * (30 / number of sentences)) + 3.1291. The SMOG also returns a school grade, indicating the education level a reader needs to understand a text, and it has been recommended by the National Cancer Institute as performing better than the other tests. &#13;
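To make the four formulas concrete, here is a minimal Python sketch (an editorial illustration, not part of the original materials; the variable names are ours):&#13;
&#13;
import math&#13;
&#13;
def flesch_reading_ease(asl, asw):&#13;
    # Higher FRE = easier text, on a 100-point scale&#13;
    return 206.835 - 1.015 * asl - 84.6 * asw&#13;
&#13;
def gunning_fog(asl, pct_polysyllabic):&#13;
    # pct_polysyllabic: percentage of words with three or more syllables&#13;
    return 0.4 * (asl + pct_polysyllabic)&#13;
&#13;
def flesch_kincaid_grade(asl, asw):&#13;
    return 0.39 * asl + 11.8 * asw - 15.59&#13;
&#13;
def smog(n_polysyllabic, n_sentences):&#13;
    # Grade level; note the 3.1291 constant sits outside the square root&#13;
    return 1.043 * math.sqrt(n_polysyllabic * (30 / n_sentences)) + 3.1291&#13;
&#13;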
Demographic attributes. Participants’ demographic characteristics were recorded, including gender (coded: Male, Female, non-binary, prefer not to say), education (coded: Secondary, Further, Higher), and ethnicity (coded: White, Black, Asian, Mixed, Other). </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3026">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3027">
                <text>Data/Excel.csv&#13;
Data/R.r&#13;
Data/DS_Store</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3028">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3029">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3030">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3031">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3032">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3039">
                <text>Mistry, Daniel&#13;
Lin, Pei-Ying</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3040">
                <text>Liu2015</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3041">
                <text>Vocabulary knowledge, health literacy, reading comprehension skill, reading strategy</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3033">
                <text>Robert Davies</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3034">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3035">
                <text>Cognitive</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3036">
                <text>307 participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3037">
                <text>Bayesian analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="145" public="1" featured="0">
    <fileContainer>
      <file fileId="136">
        <src>https://www.johnntowse.com/LUSTRE/files/original/78ebb8c54e3cbdb306df0d2337a3ee7a.pdf</src>
        <authentication>eff2d992759a35de11f501a68f43047f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="6">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="187">
                  <text>RT &amp; Accuracy</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="188">
                  <text>Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3001">
                <text>Age-Related Changes in the Attentional Modulation of Temporal Binding </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3002">
                <text>Jessica Pepper</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3003">
                <text>8th September 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3004">
                <text>In multisensory integration, the time range within which visual and auditory information can be perceived as synchronous and bound together is known as the temporal binding window (TBW). With increasing age, the TBW becomes wider, such that older adults erroneously, and often dangerously, integrate sensory inputs that are asynchronous. Recent research suggests that attentional cues can narrow the width of the TBW in younger adults, sharpening temporal perception and increasing the accuracy of integration. However, due to their age-related declines in attentional control, it is not yet known whether older adults can deploy attentional resources to narrow the TBW in the same way as younger adults.&#13;
This study investigated the age-related changes to the attentional modulation of the TBW. 30 younger and 30 older adults completed a cued-spatial-attention version of the stream-bounce illusion, assessing the extent to which the visual and auditory stimuli were integrated when presented at three different stimulus onset asynchronies, and when attending to a validly-cued or invalidly-cued location. &#13;
A 2x2x3 mixed ANOVA revealed that when participants attended to the validly-cued location (i.e. when attention was present), susceptibility to the stream-bounce illusion decreased. However, crucially, this attentional manipulation affected audiovisual integration in younger adults but not in older adults. Whilst no definitive conclusions could be drawn about the width of the TBW, the findings suggest that older adults have multisensory integration-related attentional deficits. Directions for future research and practical applications surrounding treatments to improve the safety of older adults’ perception and navigation through the environment are discussed. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3005">
                <text>Ageing, attention, TBW, multisensory integration, stream-bounce illusion</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="3006">
                <text>Participants&#13;
This study used a total of 60 participants: 30 younger adults (15 males, 15 females) aged 18-35 years (M = 21.37, SD = 1.30) and 30 older adults (11 males, 19 females) aged 60-80 years (M = 67.91, SD = 4.71). This sample size was determined via an a priori power analysis using the data of Donohue et al. (2015) and Chen et al. (2021), who conducted similar experiments (see pre-registration on www.aspredicted.com, project ID #65513). All participants were fluent English speakers and were required to have normal or corrected-to-normal vision. Participants were ineligible to proceed with the experiment if they had a history or current diagnosis of a neurological condition (e.g. epilepsy, mild cognitive impairment, dementia, Parkinson’s Disease) or a learning impairment (e.g. dyslexia), or had severe hearing loss that required the wearing of hearing aids.&#13;
Participants were recruited via opportunity sampling; the majority of younger participants were students at Lancaster University and were known to the researcher, whilst the majority of older participants were members of the Centre for Ageing Research at Lancaster University. All participants were able to provide informed consent. &#13;
&#13;
Pre-screening tools&#13;
Participants were asked to complete two pre-screening questionnaires using Qualtrics survey software (www.qualtrics.com), to assess their eligibility for the study.&#13;
Speech, Spatial and Quality of Hearing Questionnaire (SSQ; Appendix A; Gatehouse &amp; Noble, 2004). Participants rated their hearing ability in different acoustic scenarios using a sliding scale from 0-10 (0=“Not at all”, 10=“Perfectly”). Whilst, at present, no defined cut-off score on the SSQ is available as a parameter to inform decision-making, previous studies have indicated that a mean score of 5.5 is indicative of moderate hearing loss (Gatehouse &amp; Noble, 2004). As a result, people whose average score on the SSQ was lower than 5.5 were not eligible to participate in the experiment.&#13;
Informant Questionnaire on Cognitive Decline in the Elderly (IQ-CODE; Appendix B; Jorm, 2004). Participants rated how their performance in certain tasks now has changed compared to 10 years ago, answering on a 5-point Likert scale (1=“Much Improved”, 5=“Much worse”). An average score of approximately 3.3 is the usual cut-off point when evaluating cognitive impairment and dementia (Jorm, 2004), therefore people whose average score was higher than 3.3 were not eligible to participate in the experiment. &#13;
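As an illustration only (not a procedure from the original study; the file and column names are hypothetical), the two cut-offs amount to a simple eligibility filter:&#13;
&#13;
import pandas as pd&#13;
&#13;
# One row per volunteer, with mean SSQ and IQ-CODE scores&#13;
prescreen = pd.read_csv("prescreening_scores.csv")&#13;
eligible = prescreen.query("ssq_mean >= 5.5 and iqcode_mean &lt;= 3.3")&#13;
&#13;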
The mean scores of each pre-screening questionnaire are displayed in Table 1. An independent t-test revealed that there was no significant difference between age groups on the SSQ questionnaire [t(58) = -1.15, p=.253]; however, there was a significant difference between age groups on the IQ-CODE questionnaire [t(58) = -13.29, p&lt;.001].&#13;
Table 1&#13;
Mean scores on the SSQ and IQ-CODE pre-screening questionnaires, for both younger and older adults. Standard deviations displayed in parentheses.&#13;
Age group    SSQ          IQ-CODE&#13;
Younger      8.34 (1.10)  1.74 (0.51)&#13;
Older        8.67 (1.13)  3.03 (0.09)&#13;
&#13;
&#13;
Experimental Design&#13;
This research implemented a 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(Stimulus Onset Asynchrony [SOA]: Visual Only [VO] vs 0 milliseconds vs 150 milliseconds vs 300 milliseconds) mixed design, with Age as a between-subjects factor and Cue and SOA as within-subjects factors.&#13;
The experiment consisted of 16 different trial conditions (Table 2), randomised across all participants. Replicating the paradigm used by Donohue et al. (2015), the experimental block contained 72 validly-cued trials and 24 invalidly-cued trials, which were equally distributed between each side of the screen (left/right) and SOA conditions; this means that each participant completed 144 valid trials and 48 invalid trials for each SOA.  &#13;
&#13;
&#13;
Table 2&#13;
Number of trials within each Cue and SOA condition. All participants completed a total of 768 trials.&#13;
SOA (ms)   Valid (Left) N   Valid (Right) N   Invalid (Left) N   Invalid (Right) N&#13;
0          72               72                24                 24&#13;
150        72               72                24                 24&#13;
300        72               72                24                 24&#13;
VO         72               72                24                 24&#13;
&#13;
&#13;
Stimuli and Materials&#13;
Participants completed the experiment remotely, in a quiet room on a desktop or laptop computer with a standard keyboard. All participants were asked to wear headphones/earphones. A volume check was conducted at the beginning of the experiment; participants were presented with a constant tone and asked to adjust the volume of this tone to a clear and comfortable level. &#13;
The stimuli used in the task were replicated from Donohue et al. (2015). Each trial started with an attentional cue in the centre of the screen – a letter “L” or a letter “R” instructing participants to focus on the left or the right side of the screen. In addition to this, 2 pairs of circles were positioned at the top of the screen, one pair in the left hemifield and one pair in the right hemifield. The attentional cue lasted for 1 second, and 650 milliseconds after this cue disappeared, the circles in each pair started to move towards each other downwards diagonally (i.e. the two left circles moving towards each other and the two right circles moving towards each other). &#13;
In the trials, one pair of circles moved towards each other, intersected, and continued on the same trajectory (fully overlapping and moving away from each other). This full motion of the circles formed an “X” shape, with the circles appearing to “stream” or “pass through” each other. On the opposite side of the screen, the other pair of circles stopped moving before they intersected, forming half of this “X” motion. On 75% of the trials, the full “X”-shaped motion appeared on the side of the screen that the cue directed participants towards (validly-cued trials); on the other 25% of trials, the full motion occurred on the opposite side of the screen to where the cue indicated, and the stopped motion occurred at the cued location (invalidly-cued trials).&#13;
In addition to these visual stimuli, on 75% of the trials, an auditory stimulus was played binaurally (500Hz, 17 milliseconds), either at the same time as the circles intersected (0ms delay), 150ms after the intersection or 300ms after the intersection. The remaining 25% of the trials were visual-only (i.e. no sound was played). Participants were told that regardless of whether a sound was played, they must make their pass/bounce judgements based on the full motion of the circles (the “X” shape), even if the full motion occurred on the opposite side of the screen to the one they were attending to. &#13;
The experiment ended after all 768 trials – participation lasted approximately 1 hour. The experiment was built in PsychoPy2 (Peirce et al., 2019) and hosted by Pavlovia (www.pavlovia.org). &#13;
&#13;
Procedure&#13;
Prior to the experiment, a brief meeting was organised between the participant and the researcher via Microsoft Teams, to explain the task and answer any questions. Participants were emailed a link to a Qualtrics survey, which included the participant information sheet, consent form, demographic questions and pre-screening questionnaires. If the person was deemed eligible to take part in the experiment, Qualtrics redirected participants to the experiment in Pavlovia.&#13;
Participants were then presented with instructions detailing the attentional cue elements of the task and asking them to base their judgements on the full X-shaped motion of the stimuli. Participants were asked to press M on the keyboard if they perceived the circles to “pass through” each other or press Z if they perceived the circles to “bounce off” each other, answering as quickly and as accurately as possible. &#13;
Participants completed a practice block of 10 trials, then the test session commenced. After each set of 10 random trials, participants had the opportunity to take a break. Participants were provided with a full debrief upon completion of the experiment, and all participants could enter a prize draw to win one of two £50 Amazon vouchers.&#13;
&#13;
Statistical Analyses&#13;
This study required two separate mixed ANOVAs to analyse main effects and interactions, investigating significant differences between groups and conditions.&#13;
Reaction Times. &#13;
For the first dependent variable of reaction times (RT), mean RTs were calculated for each participant in each Cue x SOA condition, representing the time taken, in milliseconds, for each participant to press M or Z on the keyboard at the end of each trial. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(SOA: 0ms vs 150ms vs 300ms vs Visual-Only) mixed ANOVA was then conducted on these mean RTs. &#13;
Bounce/Pass Judgements. &#13;
For the second dependent variable of the bounce/pass judgements, the percentage of “Bounce” responses provided in each Cue x SOA condition was calculated for each participant. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 3(SOA: 0ms vs 150ms vs 300ms) mixed ANOVA was then conducted on these percentage data. Visual-Only (VO) trials were compared separately for valid and invalid conditions using a paired samples t-test. Post-hoc paired samples t-tests were also used to investigate significant differences between the 0ms, 150ms and 300ms SOA conditions. &#13;
Bounce/Pass Judgements: Pairwise comparisons. To analyse pairwise comparisons in the significant interaction of Age and Cue, responses were collapsed across SOA conditions – that is, a grand mean percentage of “Bounce” responses was calculated by averaging the percentages of “Bounce” responses in the 0ms, 150ms and 300ms trials, separately for the Valid and the Invalid condition. This produced an overall Valid and an overall Invalid mean percentage of “Bounce” responses for each participant. A 2(Age: Younger vs Older) x 2(Collapsed Cue: Valid vs Invalid) mixed ANOVA was conducted on these collapsed data to investigate differences between the proportions of “Bounce” responses in the Valid and Invalid conditions for younger adults and for older adults. In addition, two separate one-way ANOVAs were conducted on these collapsed data, comparing younger and older adults on the Valid means and on the Invalid means respectively (Laerd, 2015). &#13;
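The collapsing step can be sketched in a few lines of pandas (an editorial illustration; the data frame and column names are hypothetical, not from the study files):&#13;
&#13;
# judgements: a pandas DataFrame in long format, one row per participant x cue x SOA cell&#13;
collapsed = (&#13;
    judgements&#13;
    .query("soa in [0, 150, 300]")  # drop the Visual-Only trials&#13;
    .groupby(["participant", "age_group", "cue"], as_index=False)["pct_bounce"]&#13;
    .mean()  # grand mean percentage of "Bounce" responses across the three SOAs&#13;
)&#13;
&#13;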
Significance. &#13;
An alpha level of .05 was used for all statistical tests. Any responses (judgements or RTs) that were ±3 standard deviations from the mean were considered anomalous and were removed from the analyses. Mauchly’s test of sphericity was violated for the main effect of SOA, therefore Greenhouse–Geisser adjusted p-values were used where appropriate. As an a priori power analysis determined the desired sample size for this study, and this sample size was achieved, non-significant results should not be due to the study being underpowered. Statistical analyses were conducted using SPSS (version 25, IBM).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="3007">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3008">
                <text>Data/SPSS.sav; Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="3009">
                <text>Pepper2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3010">
                <text>Robert Taylor</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3011">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="3012">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3013">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="3014">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="3015">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="3016">
                <text>Dr Helen Nuttall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="3017">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="3018">
                <text>Cognitive, Perception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="3019">
                <text>60 participants - 30 younger adults and 30 older adults</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3020">
                <text>ANOVA</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="144" public="1" featured="0">
    <collection collectionId="11">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="987">
                  <text>Secondary analysis</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2981">
                <text>The Effects of Different Sleep Stages on Language Learning Tasks in Young Adults</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2982">
                <text>Carly Power</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2983">
                <text>2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2984">
                <text>In order to learn a language, one must master multiple tasks, including speech segmentation and generalisation. Segmenting speech allows for the identification of words and for learning the meaning, as well as the syntactic role, of those words within phrases and sentences. Novel generalisation requires generalising over the structure of a new language not yet experienced. Frost and Monaghan (2016) showed that participants were able to use the same statistical information, at the same time, to complete both language tasks, and suggested that segmentation and grammatical generalisation depend on similar statistical processing mechanisms. The role of sleep in learning to segment and generalise language is still unclear. Sleep affects memory consolidation, which is necessary for learning a novel language; however, existing work has focused on the overall amount of sleep individuals get within their sleep cycle, and it is unknown whether the duration of separate sleep stages has an effect. Ullman’s (2004) declarative/procedural (DP) model of learning distinguishes declarative and procedural memory, which are associated with slow-wave sleep (SWS) and rapid-eye movement (REM) sleep respectively. SWS has a role in declarative memory processes, including memory for words and grammar. REM sleep has a role in procedural memory processes, involving motor skills and coordination. Sleep spindle density should also be considered, as spindles are involved in offline information processing and information transfer. It was found that increased SWS and stage 2 spindle density have a positive effect on speech segmentation compared to generalisation. </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2985">
                <text>Language learning, novel generalisation, REM, sleep, sleep spindle density, sleep stages, speech segmentation, SWS</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2986">
                <text>Participants &#13;
&#13;
The original experiment was completed by 54 participants, 8 males and 46 females, aged 18-24 years (mean age = 18.52). All participants reported being native English speakers, with no known history of auditory, speech or language disorders. All participants received either university course credit or £20 for completing the experiment. Observations may be excluded from the first linear mixed-effects model: exclusions may involve participants in the sleep group who did not sleep during the permitted time, because the first analysis aims to compare sleep vs. wake. The same participants’ data will be kept for the other linear mixed-effects models, which aim to compare the duration of sleep stages. This research received ethical approval from Dr Padraic Monaghan and Lancaster University’s Psychology Department on 22/04/2021. &#13;
&#13;
Design &#13;
&#13;
This study had a mixed design with two factors: condition (sleep vs. wake between training and testing), manipulated between participants, and test type, manipulated within participants. The two test types were speech segmentation and novel generalisation. Participants were randomly allocated to the sleep or wake condition and split evenly, meaning 27 participants slept and 27 remained awake. This study had access to PSG data for 18 of the participants in the sleep group. All participants received both test types. All participants were provided with an information sheet and gave written consent before the study commenced. &#13;
&#13;
Materials &#13;
&#13;
Stimuli &#13;
&#13;
Using the Festival speech synthesiser (Taylor et al., 1998), speech stimuli were created based on similar stimuli used by Peña et al. (2002). This artificial training language contained nine monosyllabic items (pu, ki, be, du, ta, ga, li, ra, fo), used to form three different non-adjacent pairings with three possible X items in between (A1X1–3C1, A2X1–3C2, and A3X1–3C3) (Frost &amp; Monaghan, 2016). Following Peña et al. (2002), A and C items contained plosive phonemes (pu, ki, be, du, ta, ga) and X items contained continuants (li, ra, fo). All AXC item strings had a duration of approximately 700ms. Any preferences – for dependencies not due to the statistical structure of the sequences – were controlled for by generating eight versions of the language. Each version had syllables randomly assigned to A and C items, and the same X items were used in all versions. These versions of the language were counterbalanced across both task types. When testing for novel generalisation, three additional syllables with continuant phonemes were used (ve, zo, thi) (Frost &amp; Monaghan, 2016). Research on the phonological similarities of non-adjacent dependent syllables has shown that such similarities support the acquisition of these non-adjacencies (Newport &amp; Aslin, 2004), although other research has found that they are not essential for language learning to occur (Onnis et al., 2004). Words in the same grammatical category tend to be phonologically coherent (Monaghan et al., 2007), so regardless of learning, this property of the artificial language used within this study is consistent with natural language, which allows for real-life implications. &#13;
&#13;
Training &#13;
&#13;
The speech stimuli were formed into a 10.5-minute-long continuous speech stream by stringing together the AXC words within the language. It was ensured that no Ai_Ci dependencies were repeated immediately after each other. The speech stream included 5s fades for the onset and offset of speech, which ensured that such a feature of speech could not be used as a language structure cue (Frost &amp; Monaghan, 2016). &#13;
&#13;
Testing &#13;
&#13;
Segmentation: part-words were trisyllabic items that were heard in the training speech stream but overlapped word boundaries. As such, part-words comprised either the last syllable of one word and the first two syllables of the next (CiAjX), or the last two syllables of one word and the first syllable of the next (XCiAj). For all nine AXC items, both part-word types were created. Eighteen test pairs, to which participants listened, were constructed by matching each part-word with its corresponding word (for example, the A1X2C1 item was paired with the X2C1A2 part-word) (Frost &amp; Monaghan, 2016). &#13;
Novel generalisation: nine forced-choice tests paired a rule-word (AiNCi), where N is one of three novel syllables (ve, zo, thi), with a novel part-word. For each Ai_Ci dependency, each novel rule-word appeared once. Part-words were made of two syllables that were heard in the training task, in their respective positions, together with the same novel syllable as in the rule-word sequence (Frost &amp; Monaghan, 2016). The novel syllable could appear in any position (first NCiAj, second XNAi, or third CiAjN), and each novel syllable occurred once in each of these positions. Because the novel syllable was present in both rule-words and part-words, its effect was controlled for, yet the novel generalisation task still tested for generalisation of the non-adjacent structure of items within speech (Frost &amp; Monaghan, 2016). Test-pairs were randomised in all conditions across all participants, including the position of the correct response in each test-pair, to reduce response bias. When participants listened to the test-pairs, the items in each pair were separated by a 1s pause. All participants completed the Stanford Sleepiness Scale (SSS) (Hoddes et al., 1972) in order to record participant sleepiness before the period of sleep or wake. The SSS consists of one item with a scale of seven statements, from which participants were required to select the one that best described their perceived level of sleepiness (Shahid et al., 2011) (see Appendix A). Participant responses in the testing task were excluded if 90% of responses were always “1” or “2”, or if responses alternated between “1” and “2”. &#13;
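To illustrate how part-words straddle word boundaries (an editorial sketch, not code from the study; the syllable-to-word assignments below are invented):&#13;
&#13;
import itertools&#13;
&#13;
# Hypothetical AXC words as (A, X, C) syllable triples&#13;
words = [("pu", "li", "ki"), ("be", "ra", "du"), ("ta", "fo", "ga")]&#13;
&#13;
def part_words(words):&#13;
    """Trisyllabic sequences heard in training that overlap a word boundary."""&#13;
    out = []&#13;
    for (a1, x1, c1), (a2, x2, c2) in itertools.permutations(words, 2):&#13;
        out.append((c1, a2, x2))  # CiAjX: last syllable of word i, first two of word j&#13;
        out.append((x1, c1, a2))  # XCiAj: last two syllables of word i, first of word j&#13;
    return out&#13;
&#13;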
&#13;
Procedure &#13;
&#13;
The whole procedure lasted for a three-hour period. For the training task, all participants listened to the continuous stream of speech and were instructed to pay attention to the language and think of the possible words it contained. After the training task was complete, participants were split into two groups for the sleep vs. wake condition. Half of the participants, the sleep group, were given an hour and 45 minutes to sleep. These participants slept at Lancaster University Psychology Department’s sleep lab, and their sleep was monitored using polysomnography (PSG). PSG, recorded with an Embla N7000 system, captured the amount of time spent in each sleep stage and sleep spindle density, with EEG sites O1, O2, C3, C4, F3, and F4 referenced against M1 and M2. The other half of the participants remained awake for the same duration, watching a non-verbal, emotionally neutral video with neutral music. The testing task was then given to all participants after the same total interval, 15 minutes after the break period. All participants completed the forced-choice testing tasks: within each trial, participants listened to a test-pair of items and were instructed to select which item best matched the training language, responding with “1” for the first item or “2” for the second item on a computer keyboard. All participants listened to the speech through closed-cup headphones in a quiet room (Frost &amp; Monaghan, 2016). To test speech segmentation, participants completed a forced-choice task on preference for word/part-word comparisons; to test novel generalisation, they completed a similar forced-choice task for rule-word/part-word preference.&#13;
&#13;
Data analysis&#13;
&#13;
The analysis used mixed-effects models to allow for random participant and item variability. Because all participants responded to both task types, and therefore to multiple items, responses from the same participant and responses to the same item are likely to be correlated. Generalised linear mixed-effects models offer a more flexible approach than ANOVA and handle missing data better, without a significant loss of statistical power. Participant and item variation, the effects of sleep/wake, test type, and sleep stage duration were all considered. The interactions between sleep/wake and test type, and between sleep stage duration and test type, were considered in separate models. &#13;
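As a rough sketch of such a model in Python (an editorial illustration; statsmodels’ Bayesian mixed GLM stands in for the analysis software actually used, and the file and column names are hypothetical):&#13;
&#13;
import pandas as pd&#13;
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM&#13;
&#13;
# One row per trial: correct (0/1), group (sleep/wake), test_type, participant, item&#13;
data = pd.read_csv("responses.csv")&#13;
model = BinomialBayesMixedGLM.from_formula(&#13;
    "correct ~ group * test_type",&#13;
    {"participant": "0 + C(participant)", "item": "0 + C(item)"},  # crossed random intercepts&#13;
    data,&#13;
)&#13;
result = model.fit_vb()  # variational Bayes approximation&#13;
print(result.summary())&#13;
</text>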
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2987">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2988">
                <text>Data/Excel.csv&#13;
Data/Excel.xlsx&#13;
Analysis/r_file.R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2989">
                <text>Power2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2990">
                <text>Brad Hudson</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2991">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2992">
                <text>Secondary data analysis. Data were originally collected for the paper below, but they were not analysed by the authors.&#13;
Frost, R. L. A., &amp; Monaghan, P. (2016). Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech. Cognition, 147, 70-74.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2993">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2994">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2995">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2996">
                <text>Prof. Padraic Monaghan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2997">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2998">
                <text>Cognitive, developmental, neuropsychology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2999">
                <text>54</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="3000">
                <text>Linear mixed effects modelling, correlation, sleep data analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="143" public="1" featured="0">
    <fileContainer>
      <file fileId="135">
        <src>https://www.johnntowse.com/LUSTRE/files/original/168c73959ed52a18ad7005f6a70fa065.csv</src>
        <authentication>d70674b2d31093cc490b1257b76ace7e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2961">
                <text>Do trustworthiness judgements help people to recognise synthetic faces?</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2962">
                <text>Haisa Shan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2963">
                <text>8 September 2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2964">
<text>Recent advances in digital image generative models have allowed for the artificial creation of fake imagery, such as the synthesis of highly photorealistic human faces. Style-based Generative Adversarial Networks (StyleGAN) are among the state-of-the-art generative models in this field and have been widely used for facial image generation. However, with the increasing ease of using such image generative models, security in many domains, such as forensics, border control and mass media, is vulnerable to the potential threats resulting from the misuse of image generation technologies. To date there has been only limited empirical research into the facial characteristics of StyleGAN-generated faces to support the design of detection methods against such synthetic faces. This study used StyleGAN2 (an improved version of StyleGAN) to generate faces and invited people to complete two facial image evaluation tasks: 1) a discrimination task and 2) a trustworthiness rating task. The results demonstrated that, in the discrimination task, subjects had trouble recognising synthetic faces by direct/explicit judgement, whereas in the trustworthiness rating task, subjects perceived the synthetic faces as significantly more trustworthy than real faces. The study further analysed gender and ethnicity biases in the perception of facial trustworthiness, with results showing some differences across levels of gender and ethnicity. In conclusion, people’s ability to recognise synthetic faces is poor, but it is possible that people rely on the perception of facial trustworthiness to discriminate synthetic from real faces. The findings of this study have implications for the development of detection methods against digitally generated faces.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2965">
                <text>StyleGAN, synthetic face, trustworthiness perception, facial trustworthiness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2966">
                <text>Subjects and design&#13;
Three hundred and fifty-seven subjects (114 males, mean age = 25.2, SD = 5.8; 227 females, mean age = 25.0, SD = 6.3; 10 non-binary, mean age = 23.6, SD = 8.93) were recruited to complete an online survey delivered on www.qualtrics.com. Responses from subjects who started but did not complete the online survey were excluded to avoid distorting the results. Computer-synthesised facial images were used as fake faces, mixed with real faces, to examine people’s ability to detect fake faces and differences in perceived trustworthiness between real and fake faces. Subjects received no reward for their participation, though they could see their performance scores at the end of the survey. The Qualtrics survey used a within-subjects design in which all subjects viewed the same two sets of adult facial images and completed both tasks. To eliminate between-set effects, the use of the two image sets was counterbalanced across subjects. Before the survey started, all subjects provided informed consent and completed a demographic questionnaire covering their age, gender, and ethnicity. An a priori power calculation, assuming power of 0.8, a significance level of 0.05, and a small effect size, indicated that the study needed at least 198 subjects.&#13;
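For illustration, this kind of power calculation can be approximately reproduced in R with the pwr package; the choice of a paired t-test and Cohen’s d = 0.2 for a “small” effect is an assumption here, as the exact test underlying the reported figure is not stated.&#13;
&#13;
library(pwr)&#13;
&#13;
# A-priori sample size for power = .80, alpha = .05, small effect (d = .2)&#13;
# in a paired design; gives n of roughly 198 pairs, matching the reported&#13;
# minimum of 198 subjects.&#13;
pwr.t.test(d = 0.2, sig.level = 0.05, power = 0.80, type = "paired")&#13;
&#13;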
Stimuli&#13;
A total of thirty-two human facial images (1024×1024 resolution), including 16 real and 16 synthetic faces, were used as stimuli in the survey. All real faces were taken from a publicly available dataset of high-quality human facial images, Flickr-Faces-HQ (FFHQ), which was created as a benchmark for GANs (see https://github.com/NVlabs/ffhq-dataset), and all synthetic faces were taken from the output of the StyleGAN2 generative model (see https://github.com/NVlabs/stylegan2). To ensure a diverse dataset, each of the two sets of faces contained 4 Black, 4 East Asian, 4 South Asian, and 4 White faces, with 2 males and 2 females per ethnicity. Half of the sixteen faces in each set were real and half were synthetic, but this was unknown to subjects.&#13;
Procedure&#13;
First, subjects completed a short questionnaire for demographic information (age, gender, ethnicity); subjects had to be 18 years of age or older to take part. Prior to the main test, an example of a real and a synthetic face was presented to give subjects a general impression of what real and synthetic faces look like. Subjects were then asked to complete two face evaluation tasks: 1) a discrimination task and 2) a trustworthiness rating task. The two tasks were presented in a counterbalanced order to check for any possible order effects. Before the start of each task, participants were informed that they would see a series of 16 facial images and that they had to carry out their evaluation following the instructions provided. In both tasks, only one image was presented at a time, and individual images appeared in a random order.&#13;
In the discrimination task, participants chose between two options, “real” or “synthetic”, to classify the 16 faces according to whether they thought each presented face was real. Subjects did not receive immediate feedback during the task on the correctness of their classifications. In this task, subjects relied on direct/explicit judgements. In the trustworthiness rating task, subjects rated how trustworthy they thought each of the 16 faces looked, using a 7-point Likert scale (1 = extremely untrustworthy; 4 = neither untrustworthy nor trustworthy; 7 = extremely trustworthy). Subjects were instructed that they did not need to consider face authenticity in this task and could assume that the faces shown were all of real people. Although there was no time limit for the trustworthiness ratings, subjects were encouraged to rely on their intuition and respond as quickly as possible. This task was expected to elicit a relatively indirect/implicit evaluation of faces, via trustworthiness perception, in contrast to the direct/explicit judgement of face authenticity. At the end of the survey, subjects saw a report of their own mean trustworthiness ratings for real and synthetic faces, and their mean accuracy in classifying real and synthetic faces in the discrimination task.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2967">
                <text>Haisa Shan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2968">
                <text>data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2969">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2970">
                <text>Haisa Shan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2971">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2972">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2973">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2974">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2975">
                <text>None</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2976">
                <text>Sophie Nightingale</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2977">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2978">
                <text>Cognitive, Perception; Forensic; Social</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2979">
                <text>357 Participants</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2980">
                <text>ANOVA; Power Analysis; T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="142" public="1" featured="0">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2936">
                <text>Optimising the Use of Synaesthetic Metaphors in Advertising: The Roles of Metaphor Construction and Complexity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2937">
                <text>Emily Davenport</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2938">
                <text>06/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2939">
<text>Metaphors are commonly employed in advertising to increase its persuasive effects. Research suggests that metaphors are most effective when conveyed visually; however, linguists believe that additionally providing a linguistic cue, designed to help metaphor interpretation, can increase their effectiveness. In addition, metaphors of medium complexity are believed to drive higher effectiveness than simpler or more complex metaphors. This research investigates how these issues relate to synaesthetic metaphors: those that reference two sensory modalities. Participants were presented with print adverts whose visual and linguistic elements were adapted to contain literal messages or synaesthetic metaphors. Participants provided ratings of appreciation, purchase intentions, and perceived advert complexity. Synaesthetic metaphors produced significantly stronger persuasive effects, measured via appreciation and purchase intentions, when conveyed visually and when rated highly on complexity. Implications for advertisers who wish to incorporate and optimise the use of synaesthetic metaphors in print advertising are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2940">
                <text>Metaphors; Synaesthetic Metaphors; Advertising; Persuasiveness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2941">
                <text>Participants&#13;
This research recruited 122 participants via opportunistic sampling. Participants were native speakers of English aged 18 or over, with no history of disabilities in any of the sensory domains (sight, hearing, smell, taste and touch). Twelve participants were excluded due to incomplete survey responses and/or ineligibility according to the inclusion criteria, resulting in a sample of 110 participants (88 female, 20 male, 2 other; age: M = 38.11, SD = 18.60) who were randomly assigned to complete one of four surveys (see Design). The demographics per survey are detailed in Table 1. &#13;
&#13;
&#13;
Table 1&#13;
The Sample Size and Demographics Per Survey&#13;
          N    Gender                  Age&#13;
               Male   Female   Other   Mean    SD&#13;
Survey 1  28   4      24       -       43.68   18.94&#13;
Survey 2  29   7      21       1       32.90   17.77&#13;
Survey 3  28   5      22       1       35.07   17.09&#13;
Survey 4  25   4      21       -       41.32   19.48&#13;
&#13;
&#13;
Materials &#13;
Advert Stimuli&#13;
The advert stimuli used in this research were gathered and modified by previous researchers in Francesca Citron’s laboratory (Chen, 2019; Pan, 2019). The researchers obtained real adverts containing synaesthetic metaphors from the dataset of Bolognesi and Strik Lievers (2018). These base adverts were labelled 1-8 (see Appendix A). The researchers produced three modified versions of each base advert, editing the visual and linguistic elements (product images and slogans, respectively) to contain, or not contain, a synaesthetic metaphor, in accordance with the ‘Metaphor Category’ each version represented.&#13;
One version of each base advert conveyed a synaesthetic metaphor in both the visual and linguistic advert elements (Visual-Linguistic SM; labelled “VL”). One version contained a synaesthetic metaphor in the visual, but not linguistic, advert elements (Visual SM Only; labelled “V”). One version contained a synaesthetic metaphor in the linguistic, but not visual, advert elements (Linguistic SM Only; labelled “L”). The final version served as a control, as a synaesthetic metaphor appeared in neither the visual nor the linguistic advert elements (No SM; labelled “N”). These metaphor categories are illustrated by the example of Advert 2 (see Figure 1). In 2VL, the image displays a lemon wearing a studded mask whilst the slogan reads “A PLEASINGLY SHARP TASTE”. This synaesthetic metaphor, conveyed by the image and slogan, presents the lemonade as having a sharp taste, referencing the sensory modalities of touch (via “sharp” in the slogan, and the studded mask in the image) and taste (via “taste” in the slogan, and the lemon in the image). In 2V, the synaesthetic-metaphor-containing image of 2VL is retained; however, the slogan, “A PLEASINGLY SOUR TASTE”, no longer contains a synaesthetic metaphor since it a) is literal and b) references only one sense (via “sour taste”). In contrast, 2L retains the synaesthetic-metaphor-containing slogan of 2VL (“A PLEASINGLY SHARP TASTE”) but contains a literal product image; the synaesthetic metaphor therefore appears only in the linguistic advert elements. In 2N, the image of 2L and the slogan of 2V appear, meaning that a synaesthetic metaphor is not conveyed in either the visual or linguistic elements.&#13;
This process, of creating four versions per base advert, resulted in 32 advert stimuli. Within this, eight adverts, one per base advert, represented each metaphor category. The advert stimuli were labelled according to their base advert number (1-8) and their metaphor category (VL; V; L; N). For example, 1VL presents the version of base advert 1 belonging to the visual-linguistic SM category. The full stimuli set can be viewed in Appendix A. The synaesthetic metaphors constructed in the stimuli, and the sensory domains referenced (see Table 2), are briefly explained in Appendix B. All adverts were written in English and printed in full colour. &#13;
&#13;
Online Survey&#13;
This research used a modified version of a Qualtrics (Provo, UT) survey produced by Chen (2019) and Pan (2019). The original survey featured 11 bipolar Likert scales per advert stimulus, all intended to contain 5 points but with some mistakenly containing 7 points. This was corrected in the present research, with all scales measured 0-5. The first four scales, measuring “Appreciation”, asked participants whether they liked the advert (Agree – Disagree) and whether they perceived it as “Bad”–“Good”, “Unpleasant”–“Pleasant”, and “Unappealing”–“Appealing”. The two following questions measured “Perceived Complexity” and concerned participants’ perception of the advert as “Unclear”–“Straightforward” and as “Difficult to Understand”–“Easy to Understand”. The next three questions measured “Purchase Intentions”. In the original survey, these focused on the purchase intentions of the respondent. This was modified in the present research, following Pan’s (2019) and Chen’s (2019) finding that purchase intentions merged with appreciation in a PCA, and the belief that personal factors influence purchase intentions (Habich-Sobiegalla et al., 2019). The current survey instead asked respondents whether others would like to purchase the product, soon and in the future, and whether the advert would make others more likely to purchase the product (“Disagree”–“Agree”). On the final two questions, measuring “Perceived Realism”, participants rated the advert as “Unrealistic”–“Realistic” and “Fictitious”–“Real”. This question set was presented per advert stimulus, resulting in a total of 88 questions per survey.&#13;
&#13;
Figure 1&#13;
The Four Versions of Advert 2&#13;
Table 2&#13;
The Sensory Domains Referenced by Each Advert, When Sensory Metaphors Were and Were Not Present&#13;
            Sensory Domains Referenced&#13;
            SM Present              No SM Present&#13;
            Source      Target&#13;
Advert 1    Auditory    Taste       Taste&#13;
Advert 2    Tactile     Taste       Taste&#13;
Advert 3    Tactile     Taste       Taste&#13;
Advert 4    Visual      Auditory    Auditory&#13;
Advert 5    Visual      Auditory    Auditory&#13;
Advert 6    Visual      Smell       Smell&#13;
Advert 7    Auditory    Taste       Taste&#13;
Advert 8    Tactile     Taste       Taste&#13;
&#13;
Design&#13;
In an independent-groups design, participants were randomly assigned to complete one of four online surveys. The independent variable was the metaphor category of each advert. Each survey presented eight adverts, one from each of the eight base adverts and two from each of the four metaphor categories, so that each survey showed exactly one version of each base advert. For example, Survey 1 presented two Visual SM Only adverts (Adverts 1 and 5), two Linguistic SM Only adverts (Adverts 2 and 6), two Visual-Linguistic SM adverts (Adverts 3 and 7), and two No SM adverts (Adverts 4 and 8). Table 3 lists the advert stimuli presented per survey; a sketch of this rotation follows the table. The four dependent variables, ‘Appreciation’, ‘Purchase Intentions’, ‘Perceived Realism’ and ‘Perceived Complexity’, are further detailed in Materials and Variable Construction.&#13;
&#13;
&#13;
Table 3&#13;
The Adverts Displayed per Survey, In Order of Appearance&#13;
Survey 1   Survey 2   Survey 3   Survey 4&#13;
1V         3N         5VL        7L&#13;
2L         4V         6N         8VL&#13;
3VL        5L         7V         1N&#13;
4N         6VL        8L         2V&#13;
5V         7N         1VL        3L&#13;
6L         8V         2N         4VL&#13;
7VL        1L         3V         5N&#13;
8N         2VL        4L         6V&#13;
&#13;
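As a rough sketch (in R, consistent with the analysis languages used elsewhere in this archive), the rotation in Table 3 can be generated by cycling the four versions across surveys; the object names (versions, design) are illustrative assumptions, and the sketch reproduces only the version assignment, not the presentation order.&#13;
&#13;
versions = c("V", "L", "VL", "N")&#13;
# Base advert b in survey s receives version index ((b + s - 2) mod 4) + 1,&#13;
# so each survey shows each base advert once and each version twice.&#13;
design = outer(1:8, 1:4, function(b, s) versions[((b + s - 2) %% 4) + 1])&#13;
dimnames(design) = list(paste("Advert", 1:8), paste("Survey", 1:4))&#13;
design&#13;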
&#13;
Procedure&#13;
The entirety of this study was completed on Qualtrics (Provo, UT). Participants were informed of the researchers’ background and requirements, and briefed on their anonymity, confidentiality and right to withdraw (Appendix C), before providing informed consent (Appendix D). Participants declared their age and gender and confirmed that English was their native language and that they did not have any sensory disabilities. Participants viewed each of the eight adverts in turn and answered 11 five-point bipolar Likert scales per advert (see Materials, Online Survey). Finally, participants were debriefed, reminded of the terms of their participation, and provided with further reading (Appendix E). The study took 10 minutes to complete.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2942">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2943">
                <text>Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2944">
                <text>Davenport2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2945">
                <text>Cameron Hoppu</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2946">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2947">
                <text>Follow up on previous research in Francesca Citron's lab</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2948">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2949">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2950">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2951">
                <text>Francesca Citron</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2952">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2953">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2954">
                <text>122, but 12 excluded so final sample of 110.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2955">
                <text>ANCOVA, ANOVA, Regression, and T-Test.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="139" public="1" featured="0">
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
<text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2908">
                <text>The impact of retribution on perception of transgressor by others </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2909">
                <text>Olivia Wilson </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2910">
                <text>2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2911">
<text>Emotions play a key role within society, behaviour and human life, with moral emotions such as guilt, regret and shame being able to influence individuals’ judgments and actions. For example, a person who experiences guilt will want to repair the wrongdoing that caused it. At times these efforts to repair one’s transgression can lead an individual to self-punish in order to repair bonds with others and reduce the negative consequences of the situation. The present study experimentally investigated the effect of self-punishment intensity on perceptions of a transgressor. Participants were randomly assigned to one of three conditions of self-punishment intensity (low, correct and high). A vignette was manipulated for each condition; participants read it and answered questions on their judgments of the transgressor (perceptions of guilt, shame, regret, moral character, and trustworthiness; their willingness to forgive the transgressor; and how likely they thought the transgressor would be to reoffend in the future), rating each on a 0-5 Likert scale. Participants allocated to the low self-punishment condition had more negative perceptions of the transgressor overall when compared to correct self-punishment. However, this pattern did not extend further, as no differences were seen for those within the high self-punishment condition.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2912">
<text>Participants. Participants were recruited through the LU Sona system, as well as by opportunity sampling via social media and other accessible network platforms. A total of 174 responses were collected via Qualtrics; of those, 158 were completed through to the end, whilst 16 were only started, with at most a few questions answered. The decision was therefore made to exclude any incomplete attempts. This resulted in a final sample of 158, of which 54 were in the high punishment condition, 52 in the low punishment condition and 52 in the correct punishment condition. &#13;
Design. This was a between-subjects design with one factor at three levels (self-punishment: low punishment, correct punishment, and high punishment). Qualtrics randomly allocated participants to one of the three conditions. &#13;
Materials. A short hypothetical vignette was used to describe an event between two individuals: ‘Simon’, the transgressor, and the friend from whom he steals money. In each of the punishment conditions, the vignette introduced the scenario with the same opening sentences, setting the scene of someone committing a transgression against their friend and displaying self-directed negative affect: &#13;
Simon is out with his friends when he noticed that a member of his group has left their wallet unattended. Simon helps himself to the £40 that was in the wallet. His friend eventually realises that the money has been stolen and seems distressed. The next day, Simon feels bad for his actions and confesses to his friend that he took the money. &#13;
The final sentence of the vignette was manipulated for each of the three conditions. The sentence stated the amount of money returned to Simon’s friend, which was either less than originally taken (low punishment, £20), the same amount (correct punishment, £40) or more than originally taken (high punishment, £60). &#13;
He gives his friend all the money he has in his wallet, which came to £20 (or £40, or £60). &#13;
Hypothetical vignettes are a popular method for exploring social actions in research, allowing actions to be examined in the context of specific situations and capturing people’s judgments, reactions and perceptions of the scenario being described and/or the individual people within the vignette. They provide data in a form that is less personal, and therefore less threatening, for exploring sensitive issues and topics in society (Barter &amp; Renold, 1999; Hughes, 1998; Schoenberg &amp; Ravdal, 2000). Vignettes are a valuable technique for exploring perceptions of situations and have been utilised previously in research on guilt and perceptions of a transgressor post-transgression (McLatchie, 2019; Manstead &amp; Semin, 1981; Dijk, de Jong &amp; Peters, 2009), and so were utilised in this research on the intensity of self-punishment post-transgression. &#13;
Empirical research has shown that emotions, and perceptions of guilt specifically, focus attention on the behaviour or action that elicited those feelings (Tangney &amp; Dearing, 2002). This is why the vignette in the present study was written with a particular emphasis on presenting the transgressor as feeling remorse/guilt after failing to adhere to a social standard, stated explicitly through his acceptance of responsibility. This was done by stating that Simon ‘felt bad for his actions’, intentionally conveying to participants that, regardless of the punishment, Simon knew his behaviour was wrong. It is also conveyed through his motivation and effort to recompense the wrongdoing via self-punishment and the return of a quantity of money. The absence of this could have implied to participants a lack of emotional response, which could have impacted judgments of Simon regardless of the presence or absence of punishment. &#13;
As stated previously, other emotions, such as the self-conscious emotions of regret and shame, are often used synonymously with guilt in conversation owing to the multiple similarities between them (Shen, 2018; Bhushan, Basu &amp; Dutta, 2020; Stearns &amp; Parrott, 2012), so it was important to ensure that guilt specifically was being portrayed. McLatchie (2019) ensured this in a study investigating punishment types (no punishment, self-punishment, and other punishment) by using a vignette that described an interpersonal violation, as such violations are more strongly associated with guilt than with the other emotions. This is because they involve other individuals rather than being directed merely at the self, in which case the emotion most likely to be triggered would instead be shame. For this reason, the present study also used a vignette that described an interpersonal violation of moral and social standards, with the last sentence manipulated to present three self-punishment conditions of varying intensity. &#13;
Participants were then asked a series of questions which gathered information on their judgments of Simon. Participants rated, as third-party observers, the extent of the transgressor’s perceived guilt, shame, and regret, in line with current research providing evidence for the strong internal consistency of these measures (McLatchie, 2019). This is also consistent with previous research in which the same elements were combined to calculate an overall guilt score, emphasising the importance of these emotional responses and behaviours when judging the overall guilt experienced by the perpetrator. How much participants thought Simon (the transgressor) deserved to be forgiven was also measured, using an adapted version of Zhu et al.’s (2017) measure, which has proved effective in prior research on guilt and self-punishment (McLatchie, 2019). The final questions asked how likely participants thought Simon would be to reoffend, and to what extent they thought the punishment performed was sufficient for the transgression committed. All answers were rated on the Likert scale presented with each question. &#13;
Procedure. Participants were invited to partake in a study aiming to evaluate a ‘social action’. The survey was delivered via Qualtrics; participants were asked to read through the vignette before moving through the questions that measured their responses. As each question appeared, the vignette remained at the top of the screen for reference throughout. Answers were given on a 6-point Likert scale ranging from 0 (“Not at all”) to 5 (“Completely”), from which participants were required to choose their response. &#13;
Once participants completed this survey, a final section asked them to provide demographic information, followed by a full debrief. Demographic information included basic details such as the participant’s age and gender. Additional questions were included to gain insight into participants’ experience of situations such as the one described in the vignette, and their personal experiences of guilt, allowing any influence of the participant’s character to be considered when analysing the results. These included being asked whether they had ever had an experience as the protagonist (Simon, in this case) or as someone who has been stolen from, and whether they are prone to feelings of guilt. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2913">
                <text>Lancaster University </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2914">
<text>Data/RStudio.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2915">
                <text>Wilson2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2916">
                <text>Anastasija Jumatova</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2917">
                <text>Open (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2918">
                <text>None (unless stated otherwise)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2919">
                <text>English </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2920">
                <text>Data and Text </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2921">
                <text>Tamara Rakic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2922">
<text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2923">
                <text>Social </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2924">
                <text>158 participants ( 54 are in the high punishment condition, 52 in low punishment condition and 52 in correct punishment).</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2925">
                <text>Quantitative </text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="138" public="1" featured="0">
    <fileContainer>
      <file fileId="132">
        <src>https://www.johnntowse.com/LUSTRE/files/original/a339e171ed4f4ad6da75e1f93c80db7c.pdf</src>
        <authentication>74c6799c7cc96af439fc872b4f1cc5f2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="10">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="819">
                  <text>Interviews</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2889">
<text>Understanding the psychological, perceptual and emotional impact signage has on residents in a local community</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2890">
                <text>Alexander Wootton</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2891">
                <text>15/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2892">
<text>The placement of signage, street furniture and advertisements can have a profound impact on the appearance of a built environment. These elements play a vital role in shaping the cultural, physical and social identities that inform the perceptions residents and other stakeholders hold towards local communities, which in turn shapes behaviour. Adopting a qualitative approach, this study examined the psychological, perceptual and emotional impact that signage and other visual features can have on residents in a local community. Semi-structured interviews were conducted with residents in One Manchester property areas, with One Manchester place officers, and with residents living near these areas. Participants were shown a variety of images of signage and were prompted to discuss their emotional responses and thoughts, and to propose suggestions for improving signage. A thematic analysis of the interview data indicated four themes: signage design, reputation, community engagement and impact of signage. Reflecting on these themes, the results suggested that existing signage was physically ill-fitting and visually dull, lacking positive influential stimuli and evocative colours, and that it lacked the authenticity and character needed to resonate emotionally with passers-by. This negatively affected the reputation of the communities, leading them to be categorised as economically poor with high crime rates and leaving stakeholders feeling alienated and, in some cases, fearful. The results highlighted that the signage needs to be revitalised as part of a wider placemaking strategy to rejuvenate local environments perceived to be run down. This should support the ongoing evolution of these areas and engage community members in installing signage that is both influential and reflective of an overall collective vision.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2893">
<text>signage, placemaking, community engagement, qualitative research, community reputation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2894">
                <text>Design&#13;
Due to the need to gain an in-depth understanding of the psychological, perceptual and emotional impact signage has on residents in a community, and factoring in the Covid-19 pandemic, a qualitative approach was adopted consisting of semi-structured interviews. This style of interview was considered the most suitable method as it provides rich data on participants’ thoughts, unconstrained by the bounds of tick-box exercises or strict discussion guides. Such interviews enable researchers to “assess, confirm, validate, refute, or elaborate upon existing knowledge and the discovery of new knowledge” (McIntosh &amp; Morse, 2015, p. 1). This allows the discussion between moderator and participant to flow more smoothly and naturally (Roulston et al., 2003), while a flexible guide at the moderator’s disposal keeps the conversation on topic. Interviews in the project were conducted using Microsoft Teams and by telephone. The data were then analysed using Braun and Clarke’s (2006) six-step thematic analysis.&#13;
Braun &amp; Clarke’s (2006) six-step thematic analysis: &#13;
Familiarisation: Getting to know the overall data collected through re-reads of transcripts. &#13;
Coding: Reducing sentences and phrases into small fragments of meaning or “codes”.  &#13;
Generating themes: Identifying patterns among codes. &#13;
Reviewing themes: Ensuring that the meanings identified are relevant to the representation of the data collected (the research objectives). &#13;
Defining themes: Refining the themes developed by establishing their essence and significance. &#13;
Analysing themes: Highlighting the frequency of themes and meanings derived from the qualitative data analysis, and generating conclusions agreed upon by all researchers.&#13;
&#13;
Participants&#13;
A sample of 24 participants was originally agreed; however, only 14 participants were interviewed for the project. Participants were recruited either by One Manchester or by the lead researcher from areas across south, east and central Manchester. The participants were made up of the following:&#13;
&#13;
Eight One Manchester residents &#13;
Three One Manchester Place Coordinators who worked in specific patch areas&#13;
Three local residents living in areas where One Manchester owns property &#13;
&#13;
The lead researcher conducted site visits around areas of Manchester so that communities could be physically inspected to identify signage, which was later used to aid the discussion guide. The site visits were conducted in Rusholme, Openshaw and Clayton. &#13;
&#13;
Visiting these locations first to view all the signage, symbols and other visual features was invaluable both for generating stimulus material for the interviews and for informing the discussion guides. The aim of the sample was to gain a diverse range of viewpoints from a variety of demographics across Manchester in order to generate rich data. Participants were recruited from Clayton, Droylsden, Fallowfield, Gorton, Hulme, Openshaw, Rusholme and Whalley Range. A £20 shopping voucher was offered to incentivise participation in the study. &#13;
&#13;
&#13;
Materials &#13;
Interview guide &#13;
&#13;
To obtain the most effective feedback from participants, a discussion guide was created, which provided a structured framework to guide discussions (see Appendix A; see Appendix B for the discussed images). When formatting the discussion guide, the lead researcher took into consideration current literature on signage and sought to examine residents’ attitudes, perceptions and behaviours in connection with signage in their local community. &#13;
&#13;
The discussion guide was composed of four sections:&#13;
Section 1: A general introduction to the subject area and participants’ current awareness of signage and other visuals in their area.&#13;
Section 2: Heavily focused on the signage and other visuals gathered from the site visits. In all of the interviews, participants were shown the images in the order reflected in Appendix B and were asked the same set of questions in relation to each image in order to generate an in-depth discussion of those images. One Manchester and the lead researcher agreed that participants would not be informed that figures 1-4 were the perceived negative images and figures 5-8 the perceived positive images.&#13;
Section 3: Focused on the future trajectory for signage and symbols. Participants were asked how their perceptions would be affected if any of the discussed signage were placed in their areas now and in the future. Following this, participants were invited to share any recommendations on the design of signage.&#13;
Section 4: This section was only for One Manchester residents, who were asked questions about One Manchester’s performance and potential future actions within their communities. It was designed to give residents an active voice in how One Manchester can strengthen relations with residents and enact positive change to protect the future of local communities.&#13;
&#13;
Each question in the discussion guide was designed to be open-ended, to give participants wider scope to openly share their opinions. The guide was configured to offer flexibility in discussing topics; therefore, when required, the lead researcher altered the order and wording of questions to maintain the natural flow of discussion with participants.&#13;
&#13;
Procedure&#13;
&#13;
Interviews were carried out between June and August 2021. Participants were asked to share their opinions on a variety of topics concerning how signage in local communities affects residents psychologically, perceptually and emotionally. Before the interviews began, participants were provided with an information sheet outlining the study’s procedure, purpose and confidentiality arrangements, and their right to withdraw at any point during the study. If participants accepted the conditions of being interviewed and taking part in the project, a time was arranged to administer the interview at the participant’s convenience. Nine of the interviews were conducted through Microsoft Teams; the remaining five were conducted by telephone at the request of the participants. Before proceeding with each interview, the lead researcher restated the aims of the project and received verbal permission to go ahead with the discussion. Interviews followed the discussion guide to ensure they remained structured whilst probing concepts tied to the research question. Care was taken in each interview to give participants adequate flexibility to discuss matters significant to them that were not included in the discussion guide. When required, to guarantee ample depth, follow-up questions and prompts were employed to encourage participants to delve deeper into essential and intriguing answers (DeJonckheere &amp; Vaughn, 2019). Field notes were taken during the discussions, underlining both relevant and vital points, which enabled the researcher to refer back to any major points and subsequently assisted with data analysis (Rapley, 2004). Once all the questions had been completed, participants were asked to share any other matters they deemed crucial. If participants were satisfied with the feedback provided, the moderator ended the interview and debriefed participants about the study; the debrief was also sent electronically. Discussions typically lasted between 30 minutes and one hour, and all were transcribed.&#13;
&#13;
Analysis &#13;
&#13;
As previously mentioned, Braun and Clarke’s (2006) six-step thematic analysis was used to detect the themes and patterns underpinning residents’ psychological perceptions, attitudes and behaviours towards signage in local communities. To support Braun and Clarke’s (2006) thematic analysis, a bottom-up approach was utilised due to the project’s exploratory nature, as this facilitates the identification of themes that arise from consistent patterns within the data set. First, after each interview was completed, the researcher immediately made notes of the key concepts and beliefs and then transcribed the discussion. To guarantee the accuracy of the transcripts and the lead researcher’s familiarity with the data, audio recordings and transcripts were reviewed several times. Subsequently, the coding process began: the lead researcher analysed the data set and identified key extracts on the basis of their significance and relevance, which led to the creation of the codes. Thereafter, provisional themes were produced through a thorough examination of the coded data set, where shared patterns were discovered and judged to be similar or unified under a core notion. All codes were integrated into a central theme. The provisional themes were then revised and reviewed to ensure each theme remained clearly articulated and distinct. During this period, the coded excerpts linked to each core theme were re-examined to verify that they reinforced the central theme and featured no inconsistencies with it (Braun and Clarke, 2006). At this point, a number of themes were either excluded or merged due to the lack of sufficient data to uphold them. The procedure was repeated several times to consolidate the relevance of the themes to the research question whilst rigorously ensuring they mirrored the patterns found in the data set (Braun and Clarke, 2006). Ultimately, the final themes were selected and a meticulous account of each theme was supplied. Once the thematic analysis had been completed, extracts from the content were chosen to illustrate and support the relevant themes in the report.&#13;
&#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2895">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2896">
                <text>Word doc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2897">
                <text>Wooton2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2898">
                <text>Joel Fox</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2899">
                <text>open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2900">
                <text>Consultancy - Commercial report</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2901">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2902">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2903">
                <text>Leslie Hallam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2904">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2905">
                <text>Psychology of Advertising</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2906">
                <text>14</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2907">
                <text>Qualitative (thematic analysis)</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="137" public="1" featured="0">
    <fileContainer>
      <file fileId="131">
        <src>https://www.johnntowse.com/LUSTRE/files/original/479c9a1888cc1f0fda97893b220919cd.doc</src>
        <authentication>666af35ed0df5544aff385f320bf5c81</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaires(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2869">
<text>Exploring the Effect of Visual Complexity on Recall</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2870">
                <text>Hayleigh Proctor </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2871">
                <text>08/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2872">
<text>This study was conducted to explore the effect of visual complexity on individuals’ recall of product brands and their attributes in either simple or complex adverts. Within the field of visual complexity there has been contradictory evidence as to whether complexity helps or hinders recall; this study aims to address that question. A survey was conducted to measure participants’ free and cued recall of adverts that varied in their visual complexity. The complex advertisements were defined as having three objects included, whilst the simple advertisements had only one object. This was decided to align with the standard for defining visual complexity set by Attneave (1954), Snodgrass &amp; Vanderwart (1980) and Chikhman et al. (2012). A percentage scoring system was used to compare overall memory performance. The data showed that those in the simple condition performed better than those in the complex condition, although this was not the case for every individual. The effect of complexity was marginally significant (p &lt; 0.09); however, the study had limited power, and a replication with a larger sample could provide a more complete picture of the influence of the independent variable. Whilst this study does not provide a definitive conclusion on the effect of visual complexity, it does provide an insight into the effects of complexity on recall of product attributes in advertisements.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2873">
                <text>#visualcomplexity #recall #free-recall #cued-recall #advertisements #simple #complex</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2874">
                <text>PARTICIPANTS &#13;
The larger the number of participants in a study, the better protected the results are from extraneous variables. For this reason, participants were recruited through random snowball sampling (Emerson, 2015). Each condition had 22 participants, with a minimum age of 16 being the only participation requirement. Participants were randomly allocated to one of the four experimental conditions, providing 88 participants in total (N = 88). There were no gender requirements for participation (females, N = 47; males, N = 31; other, N = 4). &#13;
The majority of participants were born in the U.K. (N = 46) or Poland (N = 35). Most were residing in England (N = 57) or Poland (N = 21), but responses were also collected from further afield, such as France and the U.S.A. (N = 10). The majority of participants fell into the two youngest age categories, 16 to 18 years (N = 22) and 22 to 27 years (N = 37). &#13;
General demographic information provided insight into the advertisement exposure in participants’ everyday routines. The majority of participants were native English speakers (N = 49). Most participants use streaming services (N = 76), half of whom (N = 38) said their service had adverts. Participants also reported using ad blockers (N = 49), and just over a quarter use cable T.V. (N = 27). When asked whether they pay for premium applications, the majority said ‘never’ (N = 60); the rest answered ‘occasionally’ (N = 16), ‘sometimes’ (N = 9) or ‘usually’ (N = 2), whilst only one participant always pays for premium applications (N = 1). &#13;
MATERIALS &#13;
Firstly, two product categories were chosen, bottled water and soap bars, and four brands were then selected per category (see Table 1). There were 16 advertisements in total, eight each for the simple and complex conditions (APPENDIX A). The editing software GIMP was used to design the advertisements, enabling the selected products to be presented in a controlled advert setting. This ‘controlled setting’ ensured that the backgrounds were consistent across the adverts, e.g., they all used the same blue background. Additionally, no text or fonts were added, and the objects included had the same position as their counterparts. There were two experimental groups in which participants were presented with the advertisements; within those two groups, participants viewed one of the product categories, e.g., the water products. To account for confounding variables, advertisements were counterbalanced, randomizing their order of appearance. Participants saw only one product category (e.g., soap or water) and only one variation of each advert: if they saw the simple A1 Aveeno advert, they were not presented with the complex B1 Aveeno advert, and if they saw the complex Buxton advert, they were not presented with the simple Buxton version. If participants saw the soap adverts, they did not see the water adverts, and vice versa.&#13;
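&#13;
As an illustration of this allocation and counterbalancing, here is a minimal Python sketch. It is hypothetical, not the researchers’ materials: the study ran on Qualtrics with a fixed 22 participants per condition rather than fully random allocation, and only Aveeno, Dove and Buxton are named in this record, so the remaining brand names below are placeholders.&#13;
&#13;
import random&#13;
&#13;
# One advert set per (category, complexity) cell. "A" marks the simple and&#13;
# "B" the complex version, following the labelling in the text; brands other&#13;
# than Aveeno, Dove and Buxton are placeholders for those in Table 1.&#13;
ADVERTS = {&#13;
    ("soap", "simple"):   ["Aveeno_A", "Dove_A", "SoapBrand3_A", "SoapBrand4_A"],&#13;
    ("soap", "complex"):  ["Aveeno_B", "Dove_B", "SoapBrand3_B", "SoapBrand4_B"],&#13;
    ("water", "simple"):  ["Buxton_A", "WaterBrand2_A", "WaterBrand3_A", "WaterBrand4_A"],&#13;
    ("water", "complex"): ["Buxton_B", "WaterBrand2_B", "WaterBrand3_B", "WaterBrand4_B"],&#13;
}&#13;
&#13;
def assign_participant():&#13;
    # Between-groups allocation: one product category, one complexity level,&#13;
    # so a participant never sees both versions of the same brand's advert.&#13;
    category = random.choice(["soap", "water"])&#13;
    complexity = random.choice(["simple", "complex"])&#13;
    adverts = list(ADVERTS[(category, complexity)])&#13;
    random.shuffle(adverts)  # randomize order of appearance&#13;
    return category, complexity, adverts&#13;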
The web-based software Qualtrics was used to create the surveys (APPENDIX B) and a generalized report of the results. After extracting the data, SPSS was used to dummy code and manipulate the data to measure the effect of visual complexity on recall. &#13;
DESIGN &#13;
This experiment used a between-groups design wherein participants were allocated to either the simple or the complex condition to examine which level of complexity had the larger effect (Turkeltaub et al., 2011). The type of complexity, simple or complex, was the independent variable of the experiment; the dependent variable was participants’ recall (Atinc et al., 2011). In this project, simple advertisements are defined as having only one object included in the background, whereas complex advertisements are defined as having three objects. &#13;
Participants were first asked questions pertaining to free recall of product attributes before being presented with the cued recall questions. This allowed a distinction between unprompted (free) and prompted (cued) responses, enabling me to mark each survey and allocate a combined percentage recall score to each participant. &#13;
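&#13;
The combined percentage score can be illustrated with a short sketch (hypothetical Python, not the author’s SPSS workflow; the point totals are those given later in the Procedure):&#13;
&#13;
# Points available per category, as stated in this record.&#13;
FREE_POINTS = {"soap": 26, "water": 15}&#13;
CUED_POINTS = {"soap": 16, "water": 16}&#13;
&#13;
def combined_recall_percentage(category, free_score, cued_score):&#13;
    # A participant's free + cued points as a percentage of those available.&#13;
    total = FREE_POINTS[category] + CUED_POINTS[category]&#13;
    return 100 * (free_score + cued_score) / total&#13;
&#13;
# e.g. a soap-condition participant with 13 free and 10 cued points:&#13;
# combined_recall_percentage("soap", 13, 10)  ->  54.76...&#13;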
To control for confounding variables, the surveys were counterbalanced: participants were shown the adverts in a random order within each experimental group so that sequence effects could be isolated. However, I could not control for extraneous variables such as the time of day participants completed the survey, their emotional state, or their level of intelligence. Additionally, situational factors, such as whether the room participants were in was too loud, too hot or too cold, could not be accounted for. &#13;
To prevent participants from rehearsing the material, distraction tasks were provided before requesting question responses (APPENDIX C). These were designed to be cognitively engaging, requiring participants to read sections of text and ‘fill in’ the missing words, and to select the ‘odd word out’ in a listing task. When completing these tasks, participants would not necessarily be aware that the tasks were not an essential part of the study and would thus have to pause while processing their responses. For example, ‘which word does not belong with the others?’ had the response options of ‘Dog’, ‘Cat’, ‘Donkey’ and ‘Dragon’. There were actually two responses that could be deemed correct; however, participants were told to select one: ‘Cat’, as it is the only word beginning with the letter ‘C’, and ‘Dragon’, as it is the only creature with wings. Participants could not advance to the next section if any responses were left blank. &#13;
All of the advertisements had the same consistent blue background, no fonts were used, and all objects had the same positioning between the simple and complex conditions. For example, A2 and B2 Dove both had the blue ribbon object included in the same position. All simple advertisements had one object; all complex advertisements had three objects to allow a comparison of the effect of complexity on consumers' explicit recall. &#13;
PROCEDURE &#13;
Participants were recruited and randomly allocated to one of the experimental groups. They were first presented with the participant information sheet (APPENDIX D), in which general information about the experiment was explained without revealing that it was the level of complexity being measured. Participants were also required to complete the consent form (APPENDIX E), ensuring that they were aware their data would be collected anonymously and that they had the right to withdraw at any time should they please. &#13;
Participants then viewed four advertisements for 30 seconds per advert; they were not able to advance to the next image until the timer ended. The counterbalancing of the questionnaires meant that the adverts were viewed in random orders. The distraction task then engaged participants for a few minutes, as they could not advance until the distraction tasks were complete. &#13;
Participants were then asked the free recall questions, in which they were expected to list the brands they could remember and the product attributes of those brands. The soap category had 26 points available for free recall, and the water category had 15 points available. This is because more attributes are generally included on the packaging of soap than on a generic product like water, so a more comprehensive list of features could be asked about. &#13;
Once the participant had submitted the free recall section, they moved on to the cued recall questions. This section provided prompts in the questions, for example, ‘name the products, if any, that were moisturizing?’; participants may not have been able to recall such an attribute freely. These questions therefore had to be presented separately so as not to influence each other, and the free recall questions had to be asked first for the same reason: if participants had filled in the cued responses first, this would have invalidated any free recall questions that followed. The soap and water categories each had 16 points available for the cued recall questions. &#13;
Once the survey was completed, participants were shown the debrief sheet (APPENDIX F) in which the aim of the study was fully explained, and they were provided with details should they have any questions about their role and wish to discuss it further. &#13;
</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2875">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2876">
                <text>Data/SPSS.sav</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2877">
                <text>Proctor2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2878">
                <text>Lydia Brooks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2879">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2880">
                <text>Field of visual complexity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2881">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2882">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2883">
                <text>LA1 4YW</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2884">
                <text>Sally Linkenauger </text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2885">
                <text>MSC</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2886">
<text>Cognitive; Perception; Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2887">
                <text>88</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2888">
                <text>ANOVA; T-Test</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="136" public="1" featured="0">
    <fileContainer>
      <file fileId="130">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d4dc1040e0bf719b8aac4376c7120bbf.pdf</src>
        <authentication>85e88c85cf74d6343dfa510d9a909980</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaires(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2849">
                <text>Optimising the Use of Synaesthetic Metaphors in Advertising: The Roles of Metaphor Construction and Complexity</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2850">
                <text>Emily Davenport</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2851">
                <text>06/09/2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2852">
<text>Metaphors are commonly employed in advertising to increase its persuasive effects. Research suggests that metaphors are most effective when conveyed visually; however, linguists believe that additionally providing a linguistic cue, designed to help metaphor interpretation, can increase their effectiveness. In addition, metaphors of medium complexity are believed to drive higher effectiveness than simpler or more complex metaphors. This research investigates how these issues relate to synaesthetic metaphors: metaphors that reference two sensory modalities. Participants were presented with print adverts whose visual and linguistic elements were adapted to contain literal messages or synaesthetic metaphors, and provided ratings of appreciation, purchase intentions, and perceived advert complexity. Synaesthetic metaphors were shown to produce significantly stronger persuasive effects, measured via appreciation and purchase intentions, when conveyed visually and when rated highly on complexity. Implications for advertisers who wish to incorporate and optimise the use of synaesthetic metaphors in print advertising are discussed.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2853">
                <text>Metaphors; Synaesthetic Metaphors; Advertising; Persuasiveness</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2854">
                <text>Participants&#13;
This research recruited 122 participants via opportunistic sampling. Participants were native speakers of English aged 18 or over, with no history of disabilities in any of the sensory domains (sight, hearing, smell, taste and touch). Twelve participants were excluded due to incomplete survey responses and/or ineligibility according to the inclusion criteria, resulting in a sample of 110 participants (88 female, 20 male, 2 other; age: M = 38.11, SD = 18.60) who were randomly assigned to complete one of four surveys (see Design). The demographics per survey are detailed in Table 1. &#13;
&#13;
&#13;
Table 1&#13;
The Sample Size and Demographics Per Survey&#13;
	N	Male	Female	Other	Age M	Age SD&#13;
Survey 1	28	4	24	-	43.68	18.94&#13;
Survey 2	29	7	21	1	32.90	17.77&#13;
Survey 3	28	5	22	1	35.07	17.09&#13;
Survey 4	25	4	21	-	41.32	19.48&#13;
&#13;
&#13;
Materials &#13;
Advert Stimuli&#13;
The advert stimuli used in this research were gathered and modified by previous researchers at Francesca Citron’s laboratory (Chen, 2019; Pan, 2019). The researchers obtained real adverts containing synaesthetic metaphors from the dataset of Bolognesi and Strik Lievers (2018). These base adverts were labelled 1-8 (see Appendix A). The researchers produced three modified versions of each base advert, editing the visual and linguistic elements (product images and slogans, respectively) to contain, or not contain, a synaesthetic metaphor, in accordance with the ‘Metaphor Category’ they represented.&#13;
One version of each base advert conveyed a synaesthetic metaphor in both the visual and linguistic advert elements (Visual-Linguistic SM; labelled “VL”). One version contained a synaesthetic metaphor in the visual, but not linguistic, advert elements (Visual SM Only; labelled “V”). One version contained a synaesthetic metaphor in the linguistic, but not visual, advert elements (Linguistic SM Only; labelled “L”). The final version served as a control, as a synaesthetic metaphor did not appear in either the visual or linguistic advert elements (No SM; labelled “N”). These metaphor categories are illustrated by the example of Advert 2 (see Figure 1). In 2VL, the image displays a lemon wearing a studded mask whilst the slogan reads “A PLEASINGLY SHARP TASTE”. This synaesthetic metaphor, conveyed by the image and slogan, attributes a sharp taste to the lemonade, referencing the sensory modalities of touch (via “sharp” in the slogan, and the studded mask in the image) and taste (via “taste” in the slogan, and the lemon in the image). In 2V, the synaesthetic-metaphor-containing image of 2VL is retained; however, the slogan, “A PLEASINGLY SOUR TASTE”, no longer contains a synaesthetic metaphor since it a) is literal and b) references only one sense (via “sour taste”). In contrast, 2L retains the synaesthetic-metaphor-containing slogan of 2VL (“A PLEASINGLY SHARP TASTE”) but contains a literal product image; the synaesthetic metaphor therefore appears only in the linguistic advert elements. In 2N, the image of 2L and the slogan of 2V appear, meaning that a synaesthetic metaphor is not conveyed in either the visual or linguistic elements.&#13;
This process of creating four versions per base advert resulted in 32 advert stimuli, with eight adverts (one per base advert) representing each metaphor category. The advert stimuli were labelled according to their base advert number (1-8) and their metaphor category (VL; V; L; N); for example, 1VL denotes the version of base advert 1 belonging to the Visual-Linguistic SM category. The full stimuli set can be viewed in Appendix A. The synaesthetic metaphors constructed in the stimuli, and the sensory domains referenced (see Table 2), are briefly explained in Appendix B. All adverts were written in English and printed in full colour. &#13;
&#13;
Online Survey&#13;
	This research used a modified version of a Qualtrics (Provo, UT) survey produced by Chen (2019) and Pan (2019). The original survey featured 11 bipolar Likert scales per advert stimulus, all intended to contain 5 points but with some mistakenly containing 7 points. This was corrected in the present research, with all scales measured 0–5. The first four scales, measuring “Appreciation”, asked participants whether they liked the advert (Agree–Disagree) and whether they perceived it as “Bad”–“Good”, “Unpleasant”–“Pleasant”, and “Unappealing”–“Appealing”. The following two questions measured “Perceived Complexity” and concerned participants’ perception of the advert as “Unclear”–“Straightforward” and as “Difficult to Understand”–“Easy to Understand”. The next three questions measured “Purchase Intentions”. In the original survey, these focused on the purchase intentions of the respondent. This was modified in the present research, following Pan (2019) and Chen’s (2019) finding that purchase intentions merged with appreciation in PCA, and the belief that personal factors influence purchase intentions (Habich-Sobiegalla et al., 2019). The current survey instead asked respondents whether others would like to purchase the product, soon and in the future, and whether the advert would make others more likely to purchase the product (“Disagree”–“Agree”). In the final two questions, measuring “Perceived Realism”, participants rated the advert as “Unrealistic”–“Realistic” and “Fictitious”–“Real”. This question set was presented per advert stimulus, resulting in a total of 88 questions per survey. &#13;
&#13;
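As a minimal sketch of how the 11 ratings per advert map onto the four constructs named above (hypothetical Python; averaging per construct is an assumption, since this record does not state whether items were averaged or summed):&#13;
&#13;
# Item positions (0-10) in survey order, per the description above:&#13;
# 4 appreciation, 2 perceived complexity, 3 purchase intentions, 2 realism.&#13;
ITEM_GROUPS = {&#13;
    "appreciation":         [0, 1, 2, 3],&#13;
    "perceived_complexity": [4, 5],&#13;
    "purchase_intentions":  [6, 7, 8],&#13;
    "perceived_realism":    [9, 10],&#13;
}&#13;
&#13;
def construct_scores(ratings):&#13;
    # ratings: the 11 scale responses (0-5) for one advert, in survey order.&#13;
    return {name: sum(ratings[i] for i in idx) / len(idx)&#13;
            for name, idx in ITEM_GROUPS.items()}&#13;
&#13;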
Figure 1&#13;
The Four Versions of Advert 2 (2VL, 2V, 2L and 2N; images not reproduced in this record)&#13;
Table 2&#13;
The Sensory Domains Referenced by Each Advert, When Synaesthetic Metaphors Were and Were Not Present&#13;
	SM Present: Source	SM Present: Target	No SM Present&#13;
Advert 1	Auditory	Taste	Taste&#13;
Advert 2	Tactile	Taste	Taste&#13;
Advert 3	Tactile	Taste	Taste&#13;
Advert 4	Visual	Auditory	Auditory&#13;
Advert 5	Visual	Auditory	Auditory&#13;
Advert 6	Visual	Smell	Smell&#13;
Advert 7	Auditory	Taste	Taste&#13;
Advert 8	Tactile	Taste	Taste&#13;
&#13;
Design&#13;
In an independent-groups design, participants were randomly assigned to complete one of four online surveys. The independent variable was the metaphor category of each advert. Each survey presented eight adverts: one version of each of the eight base adverts, with two adverts belonging to each of the four metaphor categories. For example, Survey 1 presented two Visual SM Only adverts (Adverts 1 and 5), two Linguistic SM Only adverts (Adverts 2 and 6), two Visual-Linguistic SM adverts (Adverts 3 and 7), and two No SM adverts (Adverts 4 and 8), with each base advert appearing only once. Table 3 lists the advert stimuli presented per survey. The four dependent variables, ‘Appreciation’, ‘Purchase Intentions’, ‘Perceived Realism’ and ‘Perceived Complexity’, are further detailed in Materials and Variable Construction.&#13;
&#13;
Table 3&#13;
The Adverts Displayed per Survey, In Order of Appearance&#13;
Survey 1	Survey 2	Survey 3	Survey 4&#13;
1V	3N	5VL	7L&#13;
2L	4V	6N	8VL&#13;
3VL	5L	7V	1N&#13;
4N	6VL	8L	2V&#13;
5V	7N	1VL	3L&#13;
6L	8V	2N	4VL&#13;
7VL	1L	3V	5N&#13;
8N	2VL	4L	6V&#13;
&#13;
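The rotation in Table 3 follows a regular pattern: each successive survey starts two base adverts later and advances each advert’s category by one step through the cycle V, L, VL, N. A minimal sketch that reproduces the table (hypothetical Python, not the researchers’ script):&#13;
&#13;
CATEGORIES = ["V", "L", "VL", "N"]&#13;
&#13;
def survey_adverts(survey):&#13;
    # survey = 1..4; shift the starting base advert by two and the category&#13;
    # cycle by one for each successive survey.&#13;
    s = survey - 1&#13;
    order = [((2 * s + i) % 8) + 1 for i in range(8)]  # display order&#13;
    return [str(a) + CATEGORIES[(a - 1 + s) % 4] for a in order]&#13;
&#13;
# survey_adverts(1) -> ['1V', '2L', '3VL', '4N', '5V', '6L', '7VL', '8N']&#13;
# survey_adverts(4) -> ['7L', '8VL', '1N', '2V', '3L', '4VL', '5N', '6V']&#13;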
&#13;
Procedure&#13;
The entirety of this study was completed on Qualtrics (Provo, UT). Participants were informed of the researchers' background and requirements, and briefed on their anonymity, confidentiality and right to withdraw (Appendix C), before providing informed consent (Appendix D). Participants declared their age and gender and confirmed that English was their native language and that they did not have any sensory impairments. Participants then viewed each of the eight adverts in turn and answered 11 five-point bipolar Likert scales per advert (see Materials, Online Survey). Finally, participants were debriefed, reminded of their terms of participation, and provided with further reading (Appendix E). The study took 10 minutes to complete.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2855">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2856">
                <text>Data/Excel.xlsx</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2857">
                <text>Davenport2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2858">
                <text>Malcolm Wong</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2859">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2860">
<text>Follow-up to previous research in Francesca Citron's lab</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2861">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2862">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2863">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2864">
                <text>Francesca Citron</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2865">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2866">
                <text>Marketing</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2867">
                <text>122 recruited; 12 were excluded, giving a final sample of 110.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2868">
                <text>ANCOVA, ANOVA, Regression, and T-Test.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="135" public="1" featured="0">
    <fileContainer>
      <file fileId="129">
        <src>https://www.johnntowse.com/LUSTRE/files/original/d5d66fddce33099653308110f6ceed40.docx</src>
        <authentication>a0c824eb4e49b092117cfb8fce8ce753</authentication>
      </file>
    </fileContainer>
    <collection collectionId="5">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="185">
                  <text>Questionnaire-based study</text>
                </elementText>
              </elementTextContainer>
            </element>
            <element elementId="41">
              <name>Description</name>
              <description>An account of the resource</description>
              <elementTextContainer>
                <elementText elementTextId="186">
                  <text>An analysis of self-report data from the administration of questionnaire(s)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="14">
      <name>Dataset</name>
      <description>Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2829">
                <text>Extending the Cortical Hyperexcitability Index</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2830">
                <text>Haydn Farrelly</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2831">
                <text>27/05/2022</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2832">
                <text>Anomalous perceptual experiences are associated with underlying excitation of neural activity in the cerebral cortex, known as cortical hyperexcitability (Wilkins, 1995). This can be measured behaviourally with the pattern glare test, in which migraineurs consistently show greater susceptibility to anomalous visual percepts in response to grating patterns than control participants (for review see Evans &amp; Stevenson, 2008). Based on these findings, Fong, Takahashi and Braithwaite (2019) developed a screening measure of visual cortical hyperexcitability, the Cortical Hyperexcitability Index (CHi-II), through exploratory factor analysis. This project aims to create auditory-based items for the CHi-II. Cortical hyperexcitability in the auditory cortex is also associated with a number of auditory symptoms in migraine, such as heightened auditory sensitivity and anomalous auditory percepts ranging from tinnitus-like tones to multiple conversing voices (Vingen, Pareja &amp; Støren et al., 1998; Miller, Grosberg, Crystal &amp; Robbins, 2015). As such, we created seven auditory items by adapting related questionnaire items and generating unique items based on the phenomenology of patient descriptions; these refer to experiences of hearing voices or unexplained sounds under various circumstances, as well as sensitivity to noise. Exploratory factor analysis will be conducted on the CHi-II alongside the auditory items to test which factor each item loads onto best, and Cronbach's alpha will be used to assess internal consistency. Results are discussed in terms of the debate on global versus localised patterns of hyperexcitability, as well as implications for our understanding of multisensory anomalous perceptual experiences.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2833">
                <text>Perceptual Aberrations, Cortical Hyperexcitability, Migraine, Aura, Tinnitus, Auditory Perception, Visual Perception, Hallucinations</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="2834">
                <text>Lancaster University</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2835">
                <text>Data/Excel.csv</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="43">
            <name>Identifier</name>
            <description>An unambiguous reference to the resource within a given context</description>
            <elementTextContainer>
              <elementText elementTextId="2836">
                <text>Farrelly2021</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2837">
                <text>Haydn Farrelly</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="47">
            <name>Rights</name>
            <description>Information about rights held in and over the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2838">
                <text>Open</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="46">
            <name>Relation</name>
            <description>A related resource</description>
            <elementTextContainer>
              <elementText elementTextId="2839">
                <text>Braithwaite, Marchant, Takahashi, Dewe &amp; Watson (2015)&#13;
Fong, Takahashi &amp; Braithwaite (2019)</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2840">
                <text>English</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="2841">
                <text>Data</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="38">
            <name>Coverage</name>
            <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
            <elementTextContainer>
              <elementText elementTextId="2842">
                <text>LA1 4YF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="2848">
                <text>Method &#13;
&#13;
Participants &#13;
&#13;
Forty-five participants aged 18-24 (M = 19.24) took part, either for research credits or without incentive. Of these, thirty-seven (82%) were female and thirty-seven (82%) were right-handed. Prior to the main questionnaire, a pre-screening survey asked participants to declare any history of neurosurgery (8.22%), neurological conditions (2.22%), psychological conditions (17.78%), ocular conditions (15.56%), epilepsy (0%), migraine (24.44%), or tinnitus (15.56%). &#13;
&#13;
 &#13;
&#13;
Auditory Item Creation &#13;
&#13;
As with the original CHi-II, items were based on previous questionnaires measuring anomalous perceptual experiences (Sierra &amp; Berrios, 2000; Bell, Halligan &amp; Ellis, 2006) alongside patient reports of auditory experiences in migraine (Miller, Grosberg, Crystal &amp; Robbins, 2015; Vreeburg, Leijten, Sommer &amp; Sommer, 2016). These items were split into two categories: voice-hearing and noise-hearing. We distinguished between hearing a single voice in item one ‘Do you ever hear a single voice talking aloud in your head without a clear source?’, and multiple voices in item two ‘Do you ever hear 2 or more unexplained voices talking with each other?’, as these are delineated in patient reports (Miller et al., 2015; Vreeburg et al., 2016). We also distinguished between hearing instructing voices in item three ‘Do you ever hear voices telling you what to do?’, and hearing voices which comment on thoughts and actions in item four ‘Do you ever hear voices telling you what to do, or commenting on what you are thinking or doing?’, as suggested by the CAPS and CDS (Sierra &amp; Berrios, 2000; Bell et al., 2006). The first noise item, item five ‘Do you ever notice sounds, such as ringing/buzzing, which other people around you cannot hear?’, asked participants about the occurrence of anomalous sounds, as recommended by the CAPS and CDS (Sierra &amp; Berrios, 2000; Bell et al., 2006). The final noise items referred to the volume of sounds in item six ‘Do you ever become annoyed or agitated by sounds that are too loud or uncomfortable for you?’, and distraction caused by sounds in item seven ‘Do you ever become distracted when surrounded by lots of noise?’, as these are common auditory complaints of migraine sufferers (Miller, Grosberg, Crystal &amp; Robbins, 2015; Vreeburg, Leijten, Sommer &amp; Sommer, 2016). As with the original CHi-II, participants respond to items in terms of their frequency on a zero (‘Never’) to six (‘All the time’) Likert scale, and their intensity on a zero (‘Not at all’) to six (‘Extremely intense’) Likert scale. Scores from these two scales are added to create a total score for each item. Informed consent was obtained from all participants. &#13;
&#13;
 &#13;
&#13;
Analysis &#13;
&#13;
Total scores were collected from both the original CHi-II questionnaire (Braithwaite, Marchant &amp; Takahashi et al., 2015; Fong, Takahashi &amp; Braithwaite, 2019) and the additional auditory items, and entered into an exploratory factor analysis (EFA). Parallel analysis was also applied to determine how many factors to retain in the underlying factor structure (Horn, 1965; Hayton, Allen &amp; Scarpello, 2004). Cronbach’s alpha was used to test the internal consistency of each factor.</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
      <elementSet elementSetId="4">
        <name>LUSTRE</name>
        <description>Adds LUSTRE specific project information</description>
        <elementContainer>
          <element elementId="52">
            <name>Supervisor</name>
            <description>Name of the project supervisor</description>
            <elementTextContainer>
              <elementText elementTextId="2843">
                <text>Dr. Jason Braithwaite</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="53">
            <name>Project Level</name>
            <description>Project levels should be entered as UG or MSC</description>
            <elementTextContainer>
              <elementText elementTextId="2844">
                <text>MSc</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="54">
            <name>Topic</name>
            <description>Should contain the sub-category of Psychology the project falls under</description>
            <elementTextContainer>
              <elementText elementTextId="2845">
                <text>Neuroscience</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="56">
            <name>Sample Size</name>
            <description/>
            <elementTextContainer>
              <elementText elementTextId="2846">
                <text>45</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="55">
            <name>Statistical Analysis Type</name>
            <description>The type of statistical analysis used in the project</description>
            <elementTextContainer>
              <elementText elementTextId="2847">
                <text>Factor Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
