["itemContainer",{"xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance","xsi:schemaLocation":"http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd","uri":"https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-json&page=10&sort_field=added","accessDate":"2026-05-03T09:38:11+00:00"},["miscellaneousContainer",["pagination",["pageNumber","10"],["perPage","10"],["totalResults","148"]]],["item",{"itemId":"137","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"131"},["src","https://www.johnntowse.com/LUSTRE/files/original/479c9a1888cc1f0fda97893b220919cd.doc"],["authentication","666af35ed0df5544aff385f320bf5c81"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaire(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2869"},["text","Exploring the Effect of Visual Complexity on Recall"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2870"},["text","Hayleigh Proctor "]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2871"},["text","08/09/2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2872"},["text","This study was conducted to explore the effect of visual complexity on an individual's recall of product brands and their attributes in either simple or complex adverts. Within the field of visual complexity, there has been contradiction as to whether complexity helps or hinders recall; this study aims to resolve that question. A survey was conducted to measure participants' free and cued recall for adverts that varied in their visual complexity. The complex advertisements were defined as having three objects included, whilst the simple advertisements had only one object included. This was decided to align with the industry standard for defining visual complexity as set by Attneave (1954), Snodgrass & Vanderwart (1980) and Chikhman et al. (2012). A percentage scoring system was used to compare overall memory performance. The data showed that those in the simple condition performed better compared to those in the complex condition. However, this was not the case for every individual. 
The results found the effects of complexity to be marginally significant (p < 0.09); however, the study had limited power, and a replication with a larger population could provide a more complete picture of the influence of the independent variable. Whilst this study does not provide a definitive conclusion about the effect of visual complexity, it does explore and provide insight into the effects of complexity on recall of product attributes in advertisements. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2873"},["text","#visualcomplexity #recall #free-recall #cued-recall #advertisements #simple #complex"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2874"},["text","PARTICIPANTS \r\nThe larger the number of participants in a study, the better protected the results will be from extraneous variables. For this reason, the participants were collected through random snowball sampling (Emerson, 2015). Each condition had 22 participants, with a minimum age of 16 being the only participation requirement. The participants were randomly allocated to one of the four experimental conditions, providing 88 total participants (N= 88). There were no gender requirements for participation (Females (N = 47), Males (N = 31), Other (N = 4)). \r\nThe majority of participants were born in the U.K. (N = 46) or Poland (N = 35). The majority were residing in England (N = 57) or Poland (N = 21), but responses were still collected from further afield, such as France and the U.S.A. (N = 10). The majority of participants fell into the two youngest age categories, 16 to 18-year-olds (N = 22) and 22 to 27-year-olds (N = 37). 
\r\nGeneral demographic information provided insight into the advertisement exposure in participants' generic routines. The majority of participants were native English speakers (N = 49). The majority of participants use streaming services (N = 76), of which just under half of the respondents said their service had adverts (N = 38). Participants also use ad blockers (N = 49). Just over a quarter of participants use cable T.V. (N = 27). When asked whether they pay for premium applications, the majority said ‘never’ (N = 60), followed by ‘occasionally’ (N = 16), ‘sometimes’ (N = 9) and ‘usually’ (N = 2), whilst only one participant always pays for premium applications (N = 1). \r\nMATERIALS \r\nFirstly, two product categories were chosen, bottled water and soap bars; four brands were then selected per category (see table 1). There were 16 advertisements in total, eight each for the simple and complex conditions (APPENDIX A). The editing software Gimp was used to design the advertisements to enable the selected products to be presented in the controlled advert setting. This 'controlled setting' ensured that the backgrounds were consistent across the adverts, e.g., they all used the same blue background. Additionally, no text or fonts were added, and the objects included had the same position as their counterparts. There were two experimental groups wherein participants were presented with the advertisements. Within those two groups, participants would view one of the product categories, e.g., the water products. To account for confounding variables, advertisements were counterbalanced, randomizing their order of appearance. Participants only saw one product category (e.g., soap or water) and one variation of the advert, e.g., if they saw the simple A1 Aveeno advert, they were not presented with the complex B1 Aveeno advert. If participants saw the complex B5 Buxton advert, they were not presented with the simple B1 Buxton version. 
If participants saw the soap adverts, they did not see the water and vice versa.\r\nThe web-based software Qualtrics was used to create the surveys (APPENDIX B) and a generalized report of the results. After extracting the data, SPSS was used to dummy code and manipulate the data to measure the effect of visual complexity on recall. \r\nDESIGN \r\nThis experiment used a between-group design wherein participants were allocated to either the simple or complex condition to examine which level of complexity had the larger effect (Turkeltaub et al., 2011). The type of complexity, simple or complex, is the independent variable of the experiment. The dependent variable is the effect this has on participants' recall (Atinc et al., 2011). In this project, simple advertisements are defined by having only one object included in the background, whereas complex advertisements are defined by having three objects. \r\nParticipants were first asked questions pertaining to free recall of product attributes before then being presented with the cued recall questions. This was to allow a distinction between non-prompted (free) and prompted (cued) responses, enabling me to mark each survey and allocate a combined percentage recall score to each participant. \r\nTo control for confounding variables, the surveys were counterbalanced. Participants were shown the adverts randomly within each experimental group so that I could control for the sequence effects that participants were exposed to. However, I could not control for extraneous variables such as the time of day participants completed the survey, their emotional state, or their level of intelligence. Additionally, situational factors such as the location they were in, e.g., whether the room they were in was too loud, too hot, or too cold, could not be accounted for. \r\nTo prevent participants from rehearsing the material, distraction tasks were provided before requesting question responses (APPENDIX C). 
These were designed to be cognitively engaging by requiring participants to read sections of text and 'fill in' the missing words and select the 'odd word out' in a listing task. When completing these tasks, participants would not necessarily be aware that the tasks were not an essential part of the study and thus, in processing their responses, would have to pause. For example, 'which word does not belong with the others?' had the response options of ‘Dog’, ‘Cat’, ‘Donkey’, and ‘Dragon’. There are actually two responses that could be deemed correct; however, participants were told to select one. The correct responses were ‘Cat’ as it is the only word beginning with the letter 'C' and ‘Dragon’ as it is the only creature with wings. Participants could not advance to the next section if there were any responses left blank. \r\nAll of the advertisements had the same consistent blue background, no fonts were used, and all objects had the same positioning between the simple and complex conditions. For example, A2 and B2 Dove both had the blue ribbon object included in the same position. All simple advertisements had one object; all complex advertisements had three objects to allow a comparison of the effect of complexity on consumers' explicit recall. \r\nPROCEDURE \r\nParticipants were recruited and randomly allocated to one of the experimental groups. They were first presented with the participant information sheet (APPENDIX D) in which general information about the experiment was explained without revealing that it was the level of complexity being measured. Participants were also required to complete the consent form (APPENDIX E), ensuring they were aware that their data would be collected anonymously and that they had the right to withdraw at any time should they please. \r\nParticipants then viewed four advertisements for 30 seconds per advert. They were not able to advance to the next image until the timer ended. 
The counterbalancing of questionnaires meant that the adverts were viewed in random orders. The distraction task then engaged participants for a few minutes as they could not advance until the distraction tasks were complete. \r\nParticipants were then asked the free recall questions in which they were expected to list the brands they could remember and list the product attributes for said brands. The soap category had 26 points available for free recall, and the water category had 15 points available. This is because more attributes are generally included on the packaging of soap than on a generic product like water, so a more comprehensive list of features could be asked about. \r\nOnce the participant had submitted the free recall section, they moved onto the cued recall questions. This section provided prompts in the questions, for example, ‘name the products, if any, that were moisturizing?’. Participants may not have been able to recall this attribute freely. Therefore, these questions had to be presented separately so as not to influence each other. Furthermore, the free recall had to be asked first for the same reason of not influencing responses. If participants had completed the cued responses first, this would have invalidated any free recall questions which may have followed. The soap and water categories each had 16 points available for the cued recall questions. \r\nOnce the survey was completed, participants were shown the debrief sheet (APPENDIX F) in which the aim of the study was fully explained, and they were provided with details should they have any questions about their role and wish to discuss it further. 
\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2875"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2876"},["text","Data/SPSS.sav"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2877"},["text","Proctor2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2878"},["text","Lydia Brooks"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2879"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2880"},["text","Field of visual complexity"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2881"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2882"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2883"},["text","LA1 
4YW"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2884"},["text","Sally Linkenauger "]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2885"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2886"},["text","Cognitive, Perception; Marketing"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2887"},["text","88"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2888"},["text","ANOVA; T-Test"]]]]]]]],["item",{"itemId":"138","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"132"},["src","https://www.johnntowse.com/LUSTRE/files/original/a339e171ed4f4ad6da75e1f93c80db7c.pdf"],["authentication","74c6799c7cc96af439fc872b4f1cc5f2"]]],["collection",{"collectionId":"10"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"819"},["text","Interviews"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2889"},["text","Understanding the psychological, perceptual and emotional impact signage has on residents in a local community. "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2890"},["text","Alexander Wootton"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2891"},["text","15/09/2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2892"},["text","The placement of signage, street furniture and advertisements can have a profound impact on the appearance of a built environment. They play a vital role in shaping the cultural, physical and social identities that impact the perceptions that residents and other stakeholders hold towards local communities, which in turn impact behaviours. 
Adopting a qualitative approach, this study examined the impact of signage and other visual features that can contribute to the psychological, perceptual and emotional impact that these elements can have on residents in a local community. A number of semi-structured interviews were conducted with residents in One Manchester property areas, One Manchester place officers and residents near these areas. Participants were shown a variety of visual images of signage and were prompted to discuss their emotional response and thoughts, and propose suggestions to improve signage. A thematic analysis was conducted using the interview data and indicated the following four themes: signage design, reputation, community engagement and impact of signage. Reflecting upon these themes, the results suggested that existing signage was physically ill-fitting and visually dull, lacking positive influential stimuli and evocative colours, and that it lacked the authenticity and character needed to emotionally resonate with passers-by. This negatively impacted the reputation of the communities, leading them to be categorised as economically poor with high crime rates, resulting in stakeholders feeling alienated and some fearful. The results highlighted that the signage needs to be revitalised as part of a wider placemaking strategy to rejuvenate local environments perceived to be run down. This should support the ongoing evolution of these areas and engage community members to install signage that is both influential and reflects an overall collective vision.  
\r\n\r\n"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2893"},["text","signage, placemaking, community engagement, qualitative research, community reputation\r\n"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2894"},["text","Design\r\nDue to the need to gain an in-depth understanding of the psychological, perceptual, and emotional impact signage has on residents in a community and factoring in the Covid-19 pandemic, a qualitative approach was adopted consisting of semi-structured interviews. This style of interview was considered the most suitable method as it provides rich data on the participant’s thoughts which are not constrained by the bounds of tick-box exercises or strict discussion guides. It enables researchers to “assess, confirm, validate, refute, or elaborate upon existing knowledge and the discovery of new knowledge” (Mcintosh & Morse, 2015, p. 1). This enables the discussion between the moderator and participant to flow more smoothly and naturally (Roulston et al., 2003), while a flexible guide at the moderator’s disposal keeps the conversation on topic. Interviews in the project were conducted using Microsoft Teams and telephone communication. The data was then assessed using Braun and Clarke’s (2006) six-step thematic analysis.\r\nBraun & Clarke’s (2006) six-step thematic analysis: \r\nFamiliarisation: Getting to know the overall data collected through re-reads of transcripts. \r\nCoding: Reducing sentences and phrases into small fragments of meaning or “codes”.  \r\nGenerating themes: Identifying patterns among codes. \r\nReviewing themes: Assuring that the meanings identified are relevant to the representation of data collected (research objectives). 
\r\nDefining themes: Refining the themes developed by establishing their essence and significance. \r\nAnalysing themes: Highlighting the frequency of themes and meanings derived from qualitative data analysis, and generating conclusions agreed upon by all researchers.\r\n\r\nParticipants\r\nA sample of 24 participants was originally agreed; however, only 14 participants were interviewed for the project. Participants were recruited either by One Manchester or by the lead researcher from areas across south, east and central Manchester. Participants were made up of the following:\r\n\r\nEight One Manchester residents \r\nThree One Manchester Place Coordinators who worked in specific patch areas\r\nThree local residents living in areas where One Manchester own property \r\n\r\nThe lead researcher conducted site visits around areas of Manchester so that communities could be physically inspected to identify signage, which was used to aid the discussion guide. The site visits were conducted in Rusholme, Openshawe and Clayton. \r\n\r\nVisiting these locations first to view all the signage, symbols and other visual features was invaluable both to generating stimulus material for the interviews and the discussion guides. The aim of the sample was to gain a diverse range of viewpoints from a variety of demographics across Manchester to generate rich data. Participants were recruited from: Clayton, Droysden, Fallowfield, Gorton, Hulme, Openshawe, Rusholme and Whalley Range. A £20 shopping voucher was put forward to incentivise participation in the study. \r\n\r\n\r\nMaterials \r\nInterview guide \r\n\r\nTo obtain the most effective feedback from participants, a discussion guide was created, which provided a structured framework to guide discussions (See Appendix A; see Appendix B for discussed images). 
When formatting the discussion guide, the lead researcher took into consideration current literature on signage and sought to examine residents’ attitudes, perceptions and behaviours in connection to signage in their local community. \r\n\r\nThe discussion guide was composed of four sections:\r\nSection 1: A general introduction to the subject area and participants’ current awareness of signage and other visuals in their area.\r\nSection 2: Focused heavily on signage and other visuals gathered from site visits. In all of the interviews, participants were shown the images in the order reflected in Appendix B, and they were asked the same set of questions in relation to each image in order to generate an in-depth discussion on such images. One Manchester and the lead researcher agreed that participants would not be informed that figures 1-4 were the perceived negative images and figures 5-8 were the perceived positive images.\r\nSection 3: Focused on the future trajectory for signage and symbols. Participants were asked how their perceptions would be impacted if any of the discussed signage was placed in their areas now and in the future. Following this, participants were invited to share any recommendations on the design of signage.\r\nSection 4: This was only for One Manchester residents. They were asked questions about One Manchester’s performance and potential future actions with their communities. The section was designed to give residents an active voice in how One Manchester can strengthen their relations with residents and enact positive change to protect the future of local communities.\r\n\r\nEach question in the discussion guide was designed to be open-ended, to allow participants to have a wider scope and openly share their opinions. 
The guide was configured to offer flexibility to discuss topics; therefore, when required, the lead researcher altered the order and wording of questions to maintain the natural flow of discussion with participants.\r\n\r\nProcedure\r\n\r\nInterviews were carried out between June and August 2021. Participants were requested to share their opinions around a variety of topics concerning how signage in local communities impacts residents psychologically, perceptually and emotionally. Before embarking on the interviews, participants were provided with an information sheet outlining the study procedure, purpose, confidentiality and their right to withdraw at any point during the study. If participants accepted the conditions of being interviewed and taking part in the project, a time was then arranged to administer the interview at the convenience of the participant. Nine of the interviews were conducted via Microsoft Teams; the remaining five were facilitated by telephone at the request of the participants. Before proceeding with the interview, the lead researcher pointed out again the aims of the project and received verbal permission to go ahead with the discussion. Interviews were facilitated using the discussion guide to ensure interviews remained structured whilst probing concepts tied to the research question. Attention was devoted to each interview to give participants adequate flexibility to discuss matters significant to them not included in the discussion guide. When required, to guarantee ample depth, follow-up questions and prompts were employed to stimulate participants to delve deeper into essential and intriguing answers (DeJonckheere & Vaughn, 2019). Field notes were developed during discussion, underlining both relevant and vital points, which enabled the researcher to refer to any major points and subsequently, assist them with data analysis (Rapley, 2004). 
Once all the questions had been completed, participants were asked to share any other matters they deemed crucial. If participants were then satisfied with the feedback provided, the moderator would end the interview and debrief participants about the study; the debrief was sent electronically. Discussions typically lasted between 30 minutes and 1 hour, and all were then transcribed.\r\n\r\nAnalysis \r\n\r\nAs previously mentioned, Braun and Clarke’s (2006) six-step thematic analysis was used to detect themes and patterns underpinning residents’ psychological perceptions, attitudes and behaviours towards signage in local communities. To support Braun and Clarke’s (2006) thematic analysis, a bottom-up analysis was utilised due to the project’s exploratory nature; this facilitated the identification of themes that arose from consistent patterns within the data set. Firstly, after each interview was completed, the researcher immediately made notes of the key concepts and beliefs and then transcribed the discussion. To guarantee the precision of the transcripts and the lead researcher’s familiarity with the data content, audio recordings and transcripts were reviewed several times. Subsequently, the process of creating codes began: the lead researcher analysed the data set and identified key extracts from the data on the basis of their significance and relevance, which led to the creation of the codes. Thereafter, provisional themes were produced through a thorough examination of the coded data set, when shared patterns were discovered and judged to be similar or unified under a core notion. All codes were integrated into a central theme. From this, the provisional themes were then revised and reviewed to ensure the themes remained articulated and unique. During this period, the coded excerpts linked to a core theme were re-examined to verify that they could reinforce the central theme and featured no inconsistencies with that theme (Braun and Clarke, 2006). 
At this point, a number of themes were either excluded or merged due to the lack of sufficient data to uphold the theme. The procedure was repeated several times to consolidate the relevancy of the themes to the research question whilst rigorously ensuring they mirrored the patterns found in the data set (Braun and Clarke, 2006). Ultimately, the final themes were selected and a meticulous account of each theme was supplied. Once the thematic analysis process had been completed, extracts from the content were chosen to illustrate and support the relevant themes in the report. \r\n\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2895"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2896"},["text","Word doc"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2897"},["text","Wooton2022"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2898"},["text","Joel Fox"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2899"},["text","open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2900"},["text","Consultancy - Commercial report"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2901"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2902"},["text","Data"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2903"},["text","Leslie Hallam"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2904"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2905"},["text","Psychology of Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2906"},["text","14"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2907"},["text","Qualitative (thematic analysis)"]]]]]]]],["item",{"itemId":"139","public":"1","featured":"0"},["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaire(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2908"},["text","The impact of retribution on perception of transgressor by others "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2909"},["text","Olivia Wilson "]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2910"},["text","2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2911"},["text","Emotions play a key role within society, behaviour and human life, with moral emotions such as guilt, regret and shame able to influence individuals’ judgments and actions. For example, a person who experiences guilt will want to fix the wrongdoing that has caused it. 
There are times when these efforts to repair one’s transgression can lead an individual to self-punish in order to repair bonds with others and reduce the negative consequences of the situation. The present study experimentally investigated the effect of self-punishment intensity on perceptions of a transgressor. Participants were randomly assigned to one of three conditions of self-punishment intensity (low, correct and high). Vignettes were manipulated for each condition and presented for participants to read before answering questions on their judgments of the transgressor (perceptions of guilt, shame, regret, moral character, and trustworthiness; their willingness to forgive the transgressor; and how likely they thought the transgressor would be to reoffend in the future), rated on a Likert scale of 0-5. Participants allocated to low self-punishment had more negative perceptions of the transgressor overall when compared to correct self-punishment. However, this pattern did not extend further, as no differences were seen for those in the high self-punishment condition. "]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2912"},["text","Participants. Participants were recruited through the LU Sona system as well as opportunity sampling via accessible social media and network platforms. A total of 174 responses were collected via Qualtrics; of those, 158 were completed through to the end, whilst 16 were only started, with a few questions answered at most. Therefore, incomplete attempts were excluded. This resulted in a final sample of 158, of which 54 were in the high punishment condition, 52 in the low punishment condition and 52 in the correct punishment condition. \r\nDesign. 
This was a one-factor, between-subjects design with three levels of self-punishment (low punishment, correct punishment, and high punishment). Qualtrics randomly allocated participants to one of the three conditions. \r\nMaterials. A short hypothetical vignette was used to describe an event between two individuals: ‘Simon’ the transgressor and his friend, from whom he steals money. In each of the punishment conditions, the vignette introduced the scenario with the same opening sentences to set the scene of someone performing a transgression against their friend, with feelings of self-directed negative affect presented by the transgressor: \r\nSimon is out with his friends when he notices that a member of his group has left their wallet unattended. Simon helps himself to the £40 that was in the wallet. His friend eventually realises that the money has been stolen and seems distressed. The next day, Simon feels bad for his actions and confesses to his friend that he took the money. \r\nThe final sentence of the vignettes was manipulated for each of the three conditions. The sentence stated the amount of money returned to Simon’s friend, which was either less than originally taken (low punishment, £20), the same amount (correct punishment, £40) or more than originally taken (high punishment, £60). \r\nHe gives his friend all the money he has in his wallet, which came to £20 (or £40, or \r\n£60). \r\nHypothetical vignettes have been a popular method for exploring social actions within research, allowing actions to be examined in the context of specific situations, along with people’s judgments, reactions and perceptions of the scenario being described and/or of the individuals within the vignette. This form of data collection also provides a less personal, and therefore less threatening, way of exploring sensitive issues and topics in society (Barter & Renold, 1999; Hughes, 1998; Schoenberg & Ravdal, 2000). 
Vignettes are a valuable technique for exploring perceptions of situations and have been utilised previously in research on guilt and perceptions of a transgressor post-transgression (McLatchie, 2019; Manstead & Semin, 1981; Dijk, de Jong & Peters, 2009), and so were utilised in this research on the intensity of self-punishment post-transgression. \r\nEmpirical research has shown that emotions and perceptions of guilt specifically focus attention on the behaviour and action that has occurred and that has, in turn, elicited these feelings (Tangney & Dearing, 2002). This is why the vignette in the present study was written with a particular emphasis on presenting the transgressor as feeling remorse/guilt after failing to adhere to a social standard, explicitly stated through acceptance of responsibility. This was done by stating that Simon ‘felt bad for his actions’, intentionally conveying to participants that, regardless of the punishment, Simon knew his behaviour was wrong. It can also be seen in this study in the motivations and efforts to compensate for the wrongdoing through his self-punishment and returning of a quantity of money. Absence of this could imply to participants a lack of emotional response, which could have impacted judgments of Simon regardless of the presence or absence of punishment. \r\nAs stated previously, other emotions can be used synonymously in conversation when referring to guilt, such as self-conscious emotions like regret and shame; it was therefore important to ensure that guilt specifically was being portrayed. McLatchie (2019) ensured this in his study investigating punishment types (no punishment, self-punishment, and other punishment). McLatchie used a vignette that described interpersonal violations, as these are more strongly associated with guilt than with the other emotions. 
This is because such violations involve other individuals, rather than being directed merely at the self, in which case the emotion most likely to be triggered would instead be shame. Accordingly, the present study also used a vignette that described an interpersonal violation of moral and social standards, with the last sentence manipulated to present three self-punishment conditions of varying intensity. These terms are popularly used interchangeably in conversation due to the multiple similarities between them (Shen, 2018; Bhushan, Basu & Dutta, 2020; Stearns & Parrott, 2012). \r\nParticipants were then asked a series of questions which gathered information on their judgments of Simon. Participants were asked to rate, as third-party observers, the extent of the perceived guilt, shame, and regret of the transgressor, in line with current research providing evidence of strong internal consistency for these measures (McLatchie, 2019). It is also consistent with previous research in which the same elements were combined to calculate an overall guilt score. This emphasised the importance of the emotional responses and behaviours that an individual may present when judging the overall guilt experienced by the perpetrator. How much the participant thought Simon (the transgressor) deserved to be forgiven was also measured. This was done with an adapted version of Zhu et al.’s (2017) measure, which has proved effective in prior research on guilt and self-punishment (McLatchie, 2019). The final questions asked how likely participants thought Simon would be to reoffend, and to what extent they thought the punishment performed was sufficient for the transgression committed. All answers were presented and rated on a Likert scale alongside each question. \r\nProcedure. Participants were invited to partake in a study aiming to evaluate a ‘social action’. 
Qualtrics was used to deliver the survey; participants were asked to read through the vignette before moving through the questions that measured their responses. As each question appeared, the vignette remained at the top of the screen for reference throughout. Answers were given on a 6-point Likert scale ranging from 0 (“Not at all”) to 5 (“Completely”). \r\nOnce participants completed this survey, a final section asked them to provide demographic information, followed by a full debrief. Demographic information included basic details such as the participant’s age and gender. Additional questions were included in order to gain an insight into participants’ experience with situations such as the one described in the vignette and their personal experiences with guilt, allowing any influence of the participant’s character to be seen when analysing results. These included being asked whether they had ever had an experience as the protagonist (as Simon in this case) or as someone who had been stolen from, and whether they were prone to feelings of guilt. 
\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2913"},["text","Lancaster University "]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2914"},["text","Data R AStudio .csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2915"},["text","Wilson2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2916"},["text","Anastasija Jumatova"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2917"},["text","Open (unless stated otherwise)"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2918"},["text","None (unless stated otherwise)"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2919"},["text","English "]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2920"},["text","Data and Text "]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project 
supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2921"},["text","Tamara Rakic"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2922"},["text","Masters"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2923"},["text","Social "]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2924"},["text","158 participants (54 in the high punishment condition, 52 in the low punishment condition and 52 in the correct punishment condition)."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2925"},["text","Quantitative "]]]]]]]],["item",{"itemId":"142","public":"1","featured":"0"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2936"},["text","Optimising the Use of Synaesthetic Metaphors in Advertising: The Roles of Metaphor Construction and Complexity"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2937"},["text","Emily Davenport"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2938"},["text","06/09/2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2939"},["text","Metaphors are commonly employed in advertising to increase its persuasive effects. Research suggests that metaphors are most effective when conveyed visually; however, linguists believe that additionally providing a linguistic cue, designed to aid metaphor interpretation, can increase their effectiveness. In addition, metaphors of medium complexity are believed to drive higher effectiveness than simpler or more complex metaphors. This research aims to investigate how these issues relate to synaesthetic metaphors, those that reference two sensory modalities. Participants were presented with print adverts, the visual and linguistic elements of which were adapted to contain literal messages or synaesthetic metaphors. Participants provided ratings of appreciation, purchase intentions, and perceived advert complexity. 
Synaesthetic metaphors were shown to produce significantly stronger persuasive effects, measured via appreciation and purchase intentions, when conveyed visually and when rated highly on complexity. Implications for advertisers who wish to incorporate and optimise the use of synaesthetic metaphors in print advertising are discussed. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2940"},["text","Metaphors; Synaesthetic Metaphors; Advertising; Persuasiveness"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2941"},["text","Participants\r\nThis research recruited 122 participants via opportunistic sampling. Participants were native speakers of English aged 18 or over, with no history of disabilities in any of the sensory domains (sight, hearing, smell, taste and touch). Twelve participants were excluded due to incomplete survey responses and/or ineligibility according to the inclusion criteria, resulting in a sample of 110 participants (88 female, 20 male, 2 other; age: M = 38.11, SD = 18.60) who were randomly assigned to complete one of four surveys (see Design). The demographics per survey are detailed in Table 1. \r\n\r\nTable 1\r\nThe Sample Size and Demographics Per Survey\r\nSurvey 1: N = 28 (4 male, 24 female); age M = 43.68, SD = 18.94\r\nSurvey 2: N = 29 (7 male, 21 female, 1 other); age M = 32.90, SD = 17.77\r\nSurvey 3: N = 28 (5 male, 22 female, 1 other); age M = 35.07, SD = 17.09\r\nSurvey 4: N = 25 (4 male, 21 female); age M = 41.32, SD = 19.48\r\n\r\nMaterials \r\nAdvert Stimuli\r\nThe advert stimuli used in this research were gathered and modified by previous researchers at Francesca Citron’s laboratory (Chen, 2019; Pan, 2019). The researchers obtained real adverts containing synaesthetic metaphors from the dataset of Bolognesi and Strik Lievers (2018). 
These base adverts were labelled 1-8 (see Appendix A). The researchers produced three modified versions of each base advert. They edited the visual and linguistic elements, of the product images and slogans respectively, to contain, or not contain, a synaesthetic metaphor, in accordance with the ‘Metaphor Category’ they represented.\r\nOne version of each base advert conveyed a synaesthetic metaphor in both the visual and linguistic advert elements (Visual-Linguistic SM; labelled “VL”). One version contained a synaesthetic metaphor in the visual, but not linguistic, advert elements (Visual SM Only; labelled “V”). One version contained a synaesthetic metaphor in the linguistic, but not visual, advert elements (Linguistic SM Only; labelled “L”). The final version served as a control, as a synaesthetic metaphor did not appear in either the visual or linguistic advert elements (No SM; labelled “N”). These metaphor categories are illustrated by the example of Advert 2 (see Figure 1). In 2VL, the image displays a lemon wearing a studded mask whilst the slogan reads “A PLEASINGLY SHARP TASTE”. This synaesthetic metaphor, conveyed by the image and slogan, characterises the lemonade as having a sharp taste, which references the sensory modalities of touch (via “sharp” in the slogan, and the studded mask in the image) and taste (via “taste” in the slogan, and the lemon in the image). In 2V, the synaesthetic-metaphor-containing image of 2VL is retained; however, the slogan, “A PLEASINGLY SOUR TASTE”, no longer contains a synaesthetic metaphor since it a) is literal and b) only references one sense (via “sour taste”). In contrast, 2L retains the synaesthetic-metaphor-containing slogan of 2VL (“A PLEASINGLY SHARP TASTE”) but contains a literal product image. The synaesthetic metaphor here therefore only appears in the linguistic advert elements. 
In 2N, the image of 2L and the slogan of 2V appear, meaning that a synaesthetic metaphor is not conveyed in either the visual or linguistic elements.\r\nThis process, of creating four versions per base advert, resulted in 32 advert stimuli. Within this, each metaphor category was represented by eight adverts, one per base advert. The advert stimuli were labelled according to their base advert number (1-8) and their metaphor category (VL; V; L; N). For example, 1VL denotes the version of base advert 1 belonging to the Visual-Linguistic SM category. The full stimuli set can be viewed in Appendix A. The synaesthetic metaphors constructed in the stimuli, and the sensory domains referenced (see Table 2), are briefly explained in Appendix B. All adverts were written in English and printed in full colour. \r\n\r\nOnline Survey\r\nThis research used a modified version of a Qualtrics (Provo, UT) survey produced by Chen (2019) and Pan (2019). The original survey featured 11 bipolar Likert scales per advert stimulus, all intended to contain 5 points but with some mistakenly containing 7 points. This was corrected in the present research, with all scales measured 0-5. The first four scales, measuring “Appreciation”, asked participants whether they liked the advert (Agree – Disagree) and whether they perceived it as “Bad”–“Good”; “Unpleasant”–“Pleasant”; and “Unappealing”–“Appealing”. The following two questions measured “Perceived Complexity” and concerned participants’ perception of the advert as “Unclear”–“Straightforward” and as “Difficult to Understand”–“Easy to Understand”. The next three questions measured “Purchase Intentions”. In the original survey, these focused on the purchase intentions of the respondent. This was modified in the present research, following Pan’s (2019) and Chen’s (2019) finding that purchase intentions merged with appreciation in PCA, and the belief that personal factors influence purchase intentions (Habich-Sobiegalla et al., 2019). 
The current survey instead asked respondents whether others would like to purchase the product, soon and in the future, and whether the advert would make others more likely to purchase the product (“Disagree”–“Agree”). On the final two questions, measuring “Perceived Realism”, participants rated the advert as “Unrealistic”–“Realistic” and “Fictitious”–“Real”. This question set was presented per advert stimulus, resulting in a total of 88 questions per survey. \r\n\r\nFigure 1\r\nThe Four Versions of Advert 2\r\nTable 2\r\nThe Sensory Domains Referenced by Each Advert, When Synaesthetic Metaphors Were and Were Not Present (SM present: source and target domains; SM absent: single domain)\r\nAdvert 1: Auditory (source), Taste (target); no SM: Taste\r\nAdvert 2: Tactile (source), Taste (target); no SM: Taste\r\nAdvert 3: Tactile (source), Taste (target); no SM: Taste\r\nAdvert 4: Visual (source), Auditory (target); no SM: Auditory\r\nAdvert 5: Visual (source), Auditory (target); no SM: Auditory\r\nAdvert 6: Visual (source), Smell (target); no SM: Smell\r\nAdvert 7: Auditory (source), Taste (target); no SM: Taste\r\nAdvert 8: Tactile (source), Taste (target); no SM: Taste\r\n\r\nDesign\r\nIn an independent groups design, participants were randomly assigned to complete one of four online surveys. The independent variable was the metaphor category of each advert. Each survey presented eight adverts, one belonging to each of the eight base adverts and two belonging to each of the four metaphor categories. For example, Survey 1 presented two Visual SM Only adverts (Adverts 1 and 5), two Linguistic SM Only adverts (Adverts 2 and 6), two Visual-Linguistic SM adverts (Adverts 3 and 7), and two No SM adverts (Adverts 4 and 8), with each version of a base advert appearing only once. Table 3 lists the advert stimuli presented per survey. 
The four dependent variables, ‘Appreciation’, ‘Purchase Intentions’, ‘Perceived Realism’ and ‘Perceived Complexity’, are further detailed in Materials and Variable Construction.\r\n\r\nTable 3\r\nThe Adverts Displayed per Survey, In Order of Appearance\r\nSurvey 1: 1V, 2L, 3VL, 4N, 5V, 6L, 7VL, 8N\r\nSurvey 2: 3N, 4V, 5L, 6VL, 7N, 8V, 1L, 2VL\r\nSurvey 3: 5VL, 6N, 7V, 8L, 1VL, 2N, 3V, 4L\r\nSurvey 4: 7L, 8VL, 1N, 2V, 3L, 4VL, 5N, 6V\r\n\r\nProcedure\r\nThe entirety of this study was completed on Qualtrics (Provo, UT). Participants were informed of the researchers' background and requirements, and briefed on their anonymity, confidentiality and right to withdraw (Appendix C), before providing informed consent (Appendix D). Participants declared their age and gender and confirmed that English was their native language and that they did not suffer from any sensory disabilities. Participants viewed each of the eight adverts in turn and answered 11 five-point bipolar Likert scales per advert (see Materials, Online Survey). Finally, participants were debriefed, reminded of their terms of participation, and provided with further reading (Appendix E). 
The study took 10 minutes to complete."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2942"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2943"},["text","Data/Excel.xlsx"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2944"},["text","Davenport2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2945"},["text","Cameron Hoppu"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2946"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2947"},["text","Follow up on previous research in Francesca Citron's lab"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2948"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2949"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is 
relevant"],["elementTextContainer",["elementText",{"elementTextId":"2950"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2951"},["text","Francesca Citron"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2952"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2953"},["text","Marketing"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2954"},["text","122, but 12 excluded so final sample of 110."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2955"},["text","ANCOVA, ANOVA, Regression, and T-Test."]]]]]]]],["item",{"itemId":"143","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"135"},["src","https://www.johnntowse.com/LUSTRE/files/original/168c73959ed52a18ad7005f6a70fa065.csv"],["authentication","d70674b2d31093cc490b1257b76ace7e"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaires(s)"]]]]]]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2961"},["text","Do trustworthiness judgements help people to recognise synthetic faces?"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2962"},["text","Haisa Shan"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2963"},["text","8 September 2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2964"},["text","Recent advances in digital image generative models have allowed for artificial 
creation of fake imagery, such as synthesising highly photorealistic human faces. The Style-based Generative Adversarial Network (StyleGAN) is one of the state-of-the-art generative models in this field and has been widely used for facial image generation. However, with the increasing ease of using such image generative models, security in many domains, such as forensics, border control and mass media, is vulnerable to the potential threats resulting from the misuse of image generation technologies. To date there has been only limited empirical research into the facial characteristics of StyleGAN-generated faces to support the design of detection methods against such synthetic faces. This study used StyleGAN2 (an improved version of StyleGAN) to generate faces and invited people to complete two facial image evaluation tasks: 1) a discrimination task and 2) a trustworthiness rating task. The results demonstrated that, in the discrimination task, subjects had trouble recognising synthetic faces by direct/explicit judgement, while in the trustworthiness rating task, subjects perceived the synthetic faces as significantly more trustworthy than real faces. The study further analysed gender and ethnicity biases in the perception of facial trustworthiness, with results showing some differences between levels of gender and ethnicity. In conclusion, people’s ability to recognise synthetic faces is poor, but it is possible that people could rely on the perception of facial trustworthiness to discriminate synthetic from real faces. 
The findings in this study have implications for the development of detection methods against digitally generated faces."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2965"},["text","StyleGAN, synthetic face, trustworthiness perception, facial trustworthiness"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2966"},["text","Subjects and design\r\nThree hundred and fifty-seven subjects (114 males, mean age = 25.2, SD = 5.8; 227 females, mean age = 25.0, SD = 6.3; 10 non-binary, mean age = 23.6, SD = 8.93) were recruited to complete an online survey delivered on www.qualtrics.com. The responses of subjects who started but did not complete the online survey were excluded to avoid distorting the research results. Computer-synthesised facial images were used as fake faces, mixed with real faces, to examine people’s ability to detect fake faces and perceptual differences in trustworthiness between real and fake faces. Subjects did not receive rewards for their participation, though they could see the test score of their performance at the end of the survey. The Qualtrics survey used a within-subjects design in which all subjects viewed the same two sets of adult facial images and completed each of the two tasks. To eliminate the effect of between-set differences, the use of the two image sets was counterbalanced across subjects. Before the survey started, all subjects provided informed consent and completed a demographic questionnaire about their age, gender and ethnicity. 
Assuming an experimental power of 0.8 and a significance level of 0.05, with a small effect size, the power calculation indicated that the study needed at least 198 subjects.\r\nStimuli\r\nA total of thirty-two human facial images (1024×1024 resolution), including 16 real and 16 synthetic faces, were used as stimuli in the survey. All real faces were taken from Flickr-Faces-HQ (FFHQ), a publicly available dataset of high-quality human facial images created as a benchmark for GANs (see https://github.com/NVlabs/ffhq-dataset), and all synthetic faces were obtained from the dataset of the StyleGAN2 generative image model (see https://github.com/NVlabs/stylegan2). To ensure a diverse dataset, each of the two sets of faces contained 4 Black, 4 East Asian, 4 South Asian, and 4 White faces, with 2 males and 2 females for each ethnicity. Among the sixteen faces of each set, half were real and half were synthetic, but this was unknown to subjects.\r\nProcedure\r\nFirst, subjects completed a short questionnaire for demographic information (age, gender, ethnicity); subjects had to be 18 years of age or older to take part. Prior to the main body of the test, an example of a real and a synthetic face was presented to give subjects a general impression of what real and synthetic faces look like. Subjects were then asked to complete two face evaluation tasks: 1) a Discrimination Task, and 2) a Trustworthiness Rating Task. The two tasks were presented in a counterbalanced order to check for any possible order effects. Before the start of each task, participants were informed that they would see a series of 16 facial images and that they had to carry out their evaluation following the instructions provided. 
In both tasks, only one image was presented at a time and individual images appeared in a random order.\r\nIn the discrimination task, participants chose between two options, “real” or “synthetic”, to classify the 16 faces according to whether they thought each presented face was real. Subjects did not receive immediate feedback during the task on the correctness of their classifications. In this task, subjects relied on direct/explicit judgements. In the trustworthiness rating task, subjects were required to rate how trustworthy they thought each of the 16 faces looked using a 7-point Likert scale (1 = extremely untrustworthy; 4 = neither untrustworthy nor trustworthy; 7 = extremely trustworthy). We instructed subjects that they did not need to consider face authenticity in this task and could assume that the faces shown to them were all of real people. Although there was no time limit for the trustworthiness ratings, we encouraged subjects to rely on their intuitions and respond as quickly as possible. In this task, we expected to elicit a relatively indirect/implicit approach to evaluating faces, via trustworthiness perception, in contrast to direct/explicit judgement of face authenticity. 
At the end of the survey, subjects saw a result report of their own mean trustworthiness rating scores for real and synthetic faces, and their mean accuracy in classifying real and synthetic faces in the discrimination task."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2967"},["text","Haisa Shan"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2968"},["text","data/Excel.csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2969"},["text","None"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2970"},["text","Haisa Shan"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2971"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2972"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2973"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2974"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the 
resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2975"},["text","None"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2976"},["text","Sophie Nightingale"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2977"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2978"},["text","Cognitive, Perception; Forensic; Social"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2979"},["text","357 Participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2980"},["text","ANOVA; Power Analysis; T-Test"]]]]]]]],["item",{"itemId":"144","public":"1","featured":"0"},["collection",{"collectionId":"11"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"987"},["text","Secondary analysis"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2981"},["text","The Effects of Different Sleep Stages on Language Learning Tasks in Young Adults"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2982"},["text","Carly Power"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2983"},["text","2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2984"},["text","In order to learn a language, one must practice multiple tasks, including speech segmentation and generalisation. Segmenting speech allows for the identification of words and learning the meaning as well as syntactic role of those words within phrases and sentences. Novel generalisation requires generalising over the structure of a new language not yet experienced. Frost and Monaghan (2016) showed that participants were able to use the same statistical information at the same time to complete both language tasks. 
They suggest that segmentation and grammatical generalisation depend on similar statistical processing mechanisms. The role of sleep in learning to segment and generalise language is still unclear. Sleep affects memory consolidation, which is necessary for learning a novel language. Previous research has focused on the total amount of sleep individuals get within their sleep cycle, yet it is unknown whether the duration of separate sleep stages has an effect. Ullman’s (2004) declarative/procedural (DP) model of learning distinguishes declarative and procedural memory, which are associated with slow-wave sleep (SWS) and rapid-eye-movement (REM) sleep respectively. SWS plays a role in declarative memory processes, including memory for words and grammar, whilst REM sleep plays a role in procedural memory processes, involving motor skills and coordination. Sleep spindle density should also be considered, as spindles are involved in offline information processing and information transfer. It was found that increased SWS and stage 2 spindle density have a positive effect on speech segmentation compared to generalisation. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2985"},["text","Language learning, novel generalisation, REM, sleep, sleep spindle density, sleep stages, speech segmentation, SWS"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2986"},["text","Participants \r\n\r\nThe original experiment was completed by 54 participants, 8 males and 46 females, aged 18-24 years (mean age = 18.52). All participants reported being native English speakers, with no known history of auditory, speech or language disorders. All participants received either university course credit or £20 for completing the experiment. 
For the first linear mixed-effects model, observations from participants in the sleep group who did not sleep during the permitted time may be excluded, because this analysis aims to compare sleep vs. wake. The same participants’ data will be kept for the other linear mixed-effects models, which aim to compare the duration of sleep stages. This research received ethical approval from Dr Padraic Monaghan and Lancaster University’s Psychology Department on 22/04/2021. \r\n\r\nDesign \r\n\r\nThis study manipulated one between-participants factor – sleep vs. wake between training and testing – and one within-participants factor, test type. The two test types were speech segmentation and novel generalisation. Participants were randomly allocated to the sleep or wake conditions and split evenly, meaning 27 participants slept and 27 remained awake. This study had access to PSG data for 18 of the participants in the sleep group. All participants received both test types. All participants were provided with an information sheet and gave written consent before the study commenced. \r\n\r\nMaterials \r\n\r\nStimuli \r\n\r\nSpeech stimuli were created using the Festival speech synthesiser (Taylor et al., 1998), based on similar stimuli used by Peña et al. (2002). This artificial training language contained nine monosyllabic items (pu, ki, be, du, ta, ga, li, ra, fo), used to form three different non-adjacent pairings with three possible X items in between (A1X1–3C1, A2X1–3C2, and A3X1–3C3) (Frost & Monaghan, 2016). Following Peña et al. (2002), A and C items contained plosive phonemes (pu, ki, be, du, ta, ga) and X items contained continuants (li, ra, fo). All AXC item strings had a duration of approximately 700ms. Any preferences – for dependencies not due to the statistical structure of the sequences – were controlled for by generating eight versions of the language. 
In each version, syllables were randomly assigned to A and C items, and the same X items were used in all versions. These versions of the language were counterbalanced across both task types. When testing for novel generalisation, three additional syllables with continuant phonemes (ve, zo, thi) were used (Frost & Monaghan, 2016). Research has shown that similarities in the phonological properties of non-adjacently dependent syllables support the acquisition of such non-adjacencies (Newport & Aslin, 2004). Nonetheless, other research has found that such similarities are not essential for language learning to occur (Onnis et al., 2004). Words in the same grammatical category tend to be coherent in their phonological properties (Monaghan et al., 2007), so, regardless of learning, this property of the artificial language used in this study is consistent with natural language, which allows for real-life implications. \r\n\r\nTraining \r\n\r\nThe speech stimuli were formed into a 10.5-minute-long continuous speech stream by stringing together the AXC words within the language. It was ensured that no Ai_Ci dependency was repeated immediately after itself. The speech stream included 5s fades for the onset and offset of speech, which ensured that this feature of speech could not be used as a language structure cue (Frost & Monaghan, 2016). \r\n\r\nTesting \r\n\r\nSegmentation: part-words were trisyllabic items that were heard in the training speech stream but overlapped word boundaries. As such, part-words consisted of either the last syllable of one word and the first two syllables of the next (CiAjX), or the last two syllables of one word and the first syllable of the next (XCiAj). For all nine AXC items, both part-word types were created. Eighteen test pairs, which participants listened to, were constructed by matching each part-word with its corresponding word (for example, the A1X2C1 item was paired with the X2C1A2 part-word) (Frost & Monaghan, 2016). 
\r\nNovel generalisation: nine forced choice tests each included a rule-word (AiNCi), where N is one of three novel syllables (ve, zo, thi), and a novel part-word. Each novel rule-word appeared once for each Ai_Ci dependency. Part-words were made of two syllables that were heard in the training task, in their respective positions, together with the same novel syllable as in the rule-word sequence (Frost & Monaghan, 2016). This novel syllable could appear in any position (first NCiAj, second XNAi, or third CiAjN) and each novel syllable occurred once in each of these positions. Because the novel syllable was present in both rule-words and part-words, its effect was controlled for, yet the novel generalisation task still tested for generalisation of the non-adjacent structure of items within speech (Frost & Monaghan, 2016). Test-pairs were randomised in all conditions across all participants, including the position of the correct response in each test-pair, to reduce response bias. When listening to the test-pairs, items in each pair were separated by a 1s pause. All participants completed the Stanford Sleepiness Scale (SSS; Hoddes et al., 1972) in order to record participant sleepiness before the period of sleep or wake. The SSS consists of a single item with seven statements, from which participants were required to select the one that best described their perceived level of sleepiness (Shahid et al., 2011) (see Appendix A). Participant responses in the testing task were excluded if 90% of responses were “1” or “2”, or if responses alternated between “1” and “2”. \r\n\r\nProcedure \r\n\r\nThe whole procedure lasted for a three-hour period. For the training task, all participants listened to the continuous stream of speech and were instructed to pay attention to the language and think of possible words it contained. After the training task was complete, participants were split into two groups for the sleep vs. 
wake condition. Half of the participants, the sleep group, were given an hour and 45 minutes to sleep. These participants slept at Lancaster University Psychology Department’s sleep lab, and their sleep was monitored using polysomnography (PSG). PSG, recorded with an Embla N7000 system, measured the amount of time spent in each sleep stage and sleep spindle density, with EEG sites O1, O2, C3, C4, F3, and F4 referenced against M1 and M2. The other half of the participants remained awake for the same duration, watching a non-verbal, emotionally neutral video with neutral music. The testing task was then given to all participants after the same total delay, 15 minutes after the break period. All participants were then required to complete the forced choice testing tasks. Within each trial, participants listened to a test-pair of items and were instructed to select which item best matched the training language. A response of “1” for the first item or “2” for the second item on a computer keyboard was recorded. All participants listened to the speech using closed-cup headphones in a quiet room (Frost & Monaghan, 2016). To test speech segmentation, participants completed a forced choice task on preference for word/part-word comparisons. To test novel generalisation, participants completed a similar forced choice task for rule-word/part-word preference.\r\n\r\nData analysis\r\n\r\nAnalysis included mixed-effects models to allow for random participant and item variability. Because all participants responded to both task types, and therefore to multiple items, responses from the same participant and to the same item are likely to be correlated. Generalised linear mixed-effects models allow a more flexible approach than ANOVA and handle missing data better, without a substantial loss of statistical power. Participant and item variation, the effects of sleep/wake, test type, and sleep stage duration were all considered. 
The interactions between sleep/wake and test type, and sleep stage duration and test type, were also considered in separate models. "]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2987"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2988"},["text","Data/Excel.csv\r\nData/Excel.xlsx\r\nAnalysis/r_file.R"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2989"},["text","Power2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2990"},["text","Brad Hudson"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2991"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2992"},["text","Secondary data analysis. Data were originally collected for the paper below, but they were not analysed by the authors.\r\nFrost, R. L. A., & Monaghan, P. (2016). Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech. 
Cognition, 147, 70-74."]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2993"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2994"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2995"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2996"},["text","Prof. 
Padraic Monaghan"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2997"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2998"},["text","Cognitive, developmental, neuropsychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2999"},["text","54"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3000"},["text","Linear mixed effects modelling, correlation, sleep data analysis"]]]]]]]],["item",{"itemId":"145","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"136"},["src","https://www.johnntowse.com/LUSTRE/files/original/78ebb8c54e3cbdb306df0d2337a3ee7a.pdf"],["authentication","eff2d992759a35de11f501a68f43047f"]]],["collection",{"collectionId":"6"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"187"},["text","RT & Accuracy"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"188"},["text","Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes"]]]]]]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3001"},["text","Age-Related Changes in the Attentional Modulation of Temporal Binding "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3002"},["text","Jessica Pepper"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3003"},["text","8th September 2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3004"},["text","In multisensory integration, the time range within which visual and auditory information can be perceived as synchronous and bound together is known as the temporal binding window (TBW). With increasing age, the TBW becomes wider, such that older adults erroneously, and often dangerously, integrate sensory inputs that are asynchronous. Recent research suggests that attentional cues can narrow the width of the TBW in younger adults, sharpening temporal perception and increasing the accuracy of integration. However, due to their age-related declines in attentional control, it is not yet known whether older adults can deploy attentional resources to narrow the TBW in the same way as younger adults.\r\nThis study investigated the age-related changes to the attentional modulation of the TBW. 
Thirty younger and thirty older adults completed a cued-spatial-attention version of the stream-bounce illusion, assessing the extent to which the visual and auditory stimuli were integrated when presented at three different stimulus onset asynchronies, and when attending to a validly-cued or invalidly-cued location. \r\nA 2x2x3 mixed ANOVA revealed that when participants attended to the validly-cued location (i.e. when attention was present), susceptibility to the stream-bounce illusion decreased. However, crucially, this attentional manipulation affected audiovisual integration in younger adults but not in older adults. Whilst no definitive conclusions could be drawn about the width of the TBW, the findings suggest that older adults have multisensory integration-related attentional deficits. Directions for future research and practical applications surrounding treatments to improve the safety of older adults’ perception and navigation through the environment are discussed. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3005"},["text","Ageing, attention, TBW, multisensory integration, stream-bounce illusion"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3006"},["text","Participants\r\nThis study used a total of 60 participants; 30 younger adults (15 males, 15 females) aged 18-35 years (M = 21.37, SD = 1.30) and 30 older adults (11 males, 19 females) aged 60-80 years (M = 67.91, SD = 4.71). This sample size was determined via an a priori power analysis using the data of Donohue et al. (2015) and Chen et al. (2021), who conducted similar experiments (see pre-registration on www.aspredicted.com, project ID #65513). All participants were fluent English speakers. 
Participants were required to have normal or corrected-to-normal vision. Participants were ineligible to proceed with the experiment if they had a history or current diagnosis of neurological conditions (e.g. epilepsy, mild cognitive impairment, dementia, Parkinson’s Disease) or learning impairments (e.g. dyslexia), or had severe hearing loss resulting in the wearing of hearing aids.\r\nParticipants were recruited via opportunity sampling; the majority of younger participants were students at Lancaster University and were known to the researcher, whilst the majority of older participants were members of the Centre for Ageing Research at Lancaster University. All participants were able to provide informed consent. \r\n\r\nPre-screening tools\r\nParticipants were asked to complete two pre-screening questionnaires using Qualtrics survey software (www.qualtrics.com), to assess their eligibility for the study.\r\nSpeech, Spatial and Quality of Hearing Questionnaire (SSQ; Appendix A; Gatehouse & Noble, 2004). Participants rated their hearing ability in different acoustic scenarios using a sliding scale from 0-10 (0=“Not at all”, 10=“Perfectly”). Whilst, at present, no defined cut-off score on the SSQ is available as a parameter to inform decision-making, previous studies have indicated that a mean score of 5.5 is indicative of moderate hearing loss (Gatehouse & Noble, 2004). As a result, people whose average score on the SSQ was lower than 5.5 were not eligible to participate in the experiment.\r\nInformant Questionnaire on Cognitive Decline in the Elderly (IQ-CODE; Appendix B; Jorm, 2004). Participants rated how their current performance in certain tasks has changed compared to 10 years ago, answering on a 5-point Likert scale (1=“Much Improved”, 5=“Much worse”). 
An average score of approximately 3.3 is the usual cut-off point when evaluating cognitive impairment and dementia (Jorm, 2004), therefore people whose average score was higher than 3.3 were not eligible to participate in the experiment. \r\nThe mean scores of each pre-screening questionnaire are displayed in Table 1. An independent t-test revealed that there was no significant difference between age groups on the SSQ questionnaire [t(58) = -1.15, p=.253]; however, there was a significant difference between age groups on the IQ-CODE questionnaire [t(58) = -13.29, p<.001].\r\nTable 1\r\nMean scores on the SSQ and IQ-CODE pre-screening questionnaires, for both younger and older adults. Standard deviations displayed in parentheses.\r\nAge group\tSSQ\tIQ-CODE\r\nYounger\t8.34 (1.10)\t1.74 (0.51)\r\nOlder\t8.67 (1.13)\t3.03 (0.09)\r\n\r\n\r\nExperimental Design\r\nThis research implemented a 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(Stimulus Onset Asynchrony [SOA]: Visual Only [VO] vs 0 milliseconds vs 150 milliseconds vs 300 milliseconds) mixed design, with Age as a between-subjects factor and Cue and SOA as within-subjects factors.\r\nThe experiment consisted of 16 different trial conditions (Table 2), randomised across all participants. Replicating the paradigm used by Donohue et al. (2015), the experimental block contained 72 validly-cued trials and 24 invalidly-cued trials, which were equally distributed between each side of the screen (left/right) and SOA conditions; this means that each participant completed 144 valid trials and 48 invalid trials for each SOA.  \r\n\r\n\r\nTable 2\r\nNumber of trials within each Cue and SOA condition. 
All participants completed a total of 768 trials.\r\nSOA (ms)\tValid (Left) N\tValid (Right) N\tInvalid (Left) N\tInvalid (Right) N\r\n0\t72\t72\t24\t24\r\n150\t72\t72\t24\t24\r\n300\t72\t72\t24\t24\r\nVO\t72\t72\t24\t24\r\n\r\n\r\nStimuli and Materials\r\nParticipants completed the experiment remotely, in a quiet room on a desktop or laptop computer with a standard keyboard. All participants were asked to wear headphones/earphones. A volume check was conducted at the beginning of the experiment; participants were presented with a constant tone and asked to adjust the volume of this tone to a clear and comfortable level. \r\nThe stimuli used in the task were replicated from Donohue et al. (2015). Each trial started with an attentional cue in the centre of the screen – a letter “L” or a letter “R” instructing participants to focus on the left or the right side of the screen. In addition to this, 2 pairs of circles were positioned at the top of the screen, one pair in the left hemifield and one pair in the right hemifield. The attentional cue lasted for 1 second, and 650 milliseconds after this cue disappeared, the circles in each pair started to move towards each other downwards diagonally (i.e. the two left circles moving towards each other and the two right circles moving towards each other). \r\nIn the trials, one pair of circles moved towards each other, intersected, and continued on the same trajectory (fully overlapping and moving away from each other). This full motion of the circles formed an “X” shape, with the circles appearing to “stream” or “pass through” each other. On the opposite side of the screen, the other pair of circles stopped moving before they intersected, forming half of this “X” motion. 
On 75% of the trials, the full “X”-shaped motion appeared on the side of the screen that the cue directed participants towards (validly-cued trials); on the other 25% of trials, the full motion occurred on the opposite side of the screen to where the cue indicated, and the stopped motion occurred at the cued location (invalidly-cued trials).\r\nIn addition to these visual stimuli, on 75% of the trials, an auditory stimulus was played binaurally (500Hz, 17 milliseconds), either at the same time as the circles intersected (0ms delay), 150ms after the intersection or 300ms after the intersection. The remaining 25% of the trials were visual-only (i.e. no sound was played). Participants were told that regardless of whether a sound was played, they must make their pass/bounce judgements based on the full motion of the circles (the “X” shape), even if the full motion occurred on the opposite side of the screen to the one they were attending to. \r\nThe experiment ended after all 768 trials – participation lasted approximately 1 hour. The experiment was built in PsychoPy2 (Peirce et al., 2019) and hosted by Pavlovia (www.pavlovia.org). \r\n\r\nProcedure\r\nPrior to the experiment, a brief meeting was organised between the participant and the researcher via Microsoft Teams, to explain the task and answer any questions. Participants were emailed a link to a Qualtrics survey, which included the participant information sheet, consent form, demographic questions and pre-screening questionnaires. If the person was deemed eligible to take part in the experiment, Qualtrics redirected participants to the experiment in Pavlovia.\r\nParticipants were then presented with instructions detailing the attentional cue elements of the task and asking them to base their judgements on the full X-shaped motion of the stimuli. 
Participants were asked to press M on the keyboard if they perceived the circles to “pass through” each other or press Z if they perceived the circles to “bounce off” each other, answering as quickly and as accurately as possible. \r\nParticipants completed a practice block of 10 trials, then the test session commenced. After each set of 10 random trials, participants had the opportunity to take a break. Participants were provided with a full debrief upon completion of the experiment, and all participants could enter a prize draw to win one of two £50 Amazon vouchers.\r\n\r\nStatistical Analyses\r\nThis study required two separate mixed ANOVAs to analyse main effects and interactions, investigating significant differences between groups and conditions.\r\nReaction Times. \r\nFor the first dependent variable of reaction times (RT), mean RTs were calculated for each participant in each Cue x SOA condition, representing the time taken, in milliseconds, for each participant to press M or Z on the keyboard at the end of each trial. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 4(SOA: 0ms vs 150ms vs 300ms vs Visual-Only) mixed ANOVA was then conducted on these mean RTs. \r\nBounce/Pass Judgements. \r\nFor the second dependent variable of the bounce/pass judgements, the percentage of “Bounce” responses provided in each Cue x SOA condition was calculated for each participant. A 2(Age: Younger vs Older) x 2(Cue: Valid vs Invalid) x 3(SOA: 0ms vs 150ms vs 300ms) mixed ANOVA was then conducted on these percentage data. Visual-Only (VO) trials were compared separately for valid and invalid conditions using a paired samples t-test. Post-hoc paired samples t-tests were also used to investigate significant differences between the 0ms, 150ms and 300ms SOA conditions. \r\nBounce/Pass Judgements: Pairwise comparisons. 
To analyse pairwise comparisons in the significant interaction of Age and Cue, responses in each SOA condition were collapsed – that is, a grand mean percentage of “Bounce” responses was calculated by averaging the percentage of “Bounce” responses in the 0ms, 150ms and 300ms trials in the Valid condition and in the Invalid condition. This produced an overall Valid and an overall Invalid mean percentage of “Bounce” responses for each participant. A 2(Age: Younger vs Older) x 2(Collapsed Cue: Valid vs Invalid) mixed ANOVA was conducted on these collapsed data to investigate differences between the proportion of “Bounce” responses in the Valid and Invalid conditions for younger adults, and in the Valid and Invalid conditions for older adults. In addition, two separate one-way ANOVAs were conducted on these collapsed data (Age as the between-subjects factor, and Valid or Invalid as the within-subjects factor) to investigate differences between younger and older adults in the Valid condition, and differences between younger and older adults in the Invalid condition (Laerd, 2015). \r\nSignificance. \r\nAn alpha level of .05 was used for all statistical tests. Any responses (judgements or RTs) that were ±3 standard deviations from the mean were considered anomalous and were removed from the analyses. Mauchly’s test of sphericity was violated for the main effect of SOA, therefore Greenhouse-Geisser adjusted p-values were used where appropriate. As an a priori power analysis determined the desired sample size for this study, and this sample size was achieved, non-significant results are unlikely to be attributable to the study being underpowered. 
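The ±3 standard deviation exclusion rule described under Significance can be sketched as follows. The analysis itself was run in SPSS; this Python fragment is only an illustration, with made-up RT values.

```python
from statistics import mean, stdev

def trim_outliers(values, k=3.0):
    """Remove values more than k standard deviations from the sample mean.

    A minimal sketch of the +/-3 SD exclusion rule; the mean and SD are
    computed once over the full sample, then extreme values are dropped.
    """
    m, sd = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= k * sd]

# Illustrative RTs in milliseconds: 19 plausible responses plus one extreme value.
rts = [430, 445, 450, 460, 455, 440, 448, 452, 465, 435,
       442, 458, 447, 453, 449, 461, 438, 444, 456, 5000]
print(trim_outliers(rts))
```

Note that with very small samples a single extreme value can inflate the standard deviation enough to survive this criterion, which is why the rule is applied to the full trial-level data rather than to a handful of means.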
Statistical analyses were conducted using SPSS (version 25, IBM)."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3007"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3008"},["text","Data/SPSS.sav; Data/Excel.xlsx"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3009"},["text","Pepper2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3010"},["text","Robert Taylor"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3011"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"3012"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3013"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3014"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is 
relevant"],["elementTextContainer",["elementText",{"elementTextId":"3015"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3016"},["text","Dr Helen Nuttall"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"3017"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3018"},["text","Cognitive, Perception"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3019"},["text","60 participants - 30 younger adults and 30 older adults"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3020"},["text","ANOVA"]]]]]]]],["item",{"itemId":"146","public":"1","featured":"0"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3021"},["text","The validity of traditional readability tests on accurately predicting people’s comprehension of health information"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3022"},["text","Jiawen Liu"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3023"},["text","2015"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3024"},["text","Considerable evidence indicates that readers benefit from clear and understandable health information in a variety of contexts. Authors have therefore sought to use a wide range of readability formulas to produce comprehensible texts for readers. Both traditional readability formulas and the newer Coh-Metrix algorithms have been in wide use for decades, although the utility of the newer tool has stronger theoretical support. Nevertheless, there is still little empirical evidence supporting the utility of either kind of readability formula. In this paper, a secondary data analysis was used to provide empirical evidence on whether the widely used readability tests can effectively predict participants’ comprehension responses. 
Using Bayesian generalized linear mixed-effects models, variation in both the traditional readability formulas and two of the newer Coh-Metrix measures was found to have little or no effect on variation in participants’ comprehension accuracy. Accordingly, it is suggested that researchers should think carefully before using these readability tests to analyse text difficulty."]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3025"},["text","Participants\r\nParticipants in the original study were recruited through the Prolific online platform. All were UK nationals aged eighteen or over who spoke English as their first language. Participants who completed the test battery were awarded £12.50 (equalling £6.25 per hour). All volunteers were tested, excluding participants whose recorded reading times for the health-related information texts were below 30s. The reading time included reading the text and answering the questions relating to that text, including the self-rated evaluation-of-understanding probe. While participant recruitment was administered through the Prolific platform, response data were collected through a Qualtrics survey for each study. \r\nDesign \r\nTwo studies were conducted in the original research. In Study One, participants were presented with a sample of written health information texts on a range of topics. This observation was replicated and extended in Study Two by presenting a sample of texts on a range of health topics, together with a sample of guidance texts on COVID-19. In both studies, participants were asked to complete four multiple-choice questions, each with three answer options, in response to each stimulus health text. 
After the comprehension test questions, participants were asked to rate how well they thought they understood the information in the guidance. The original dataset also included individual differences, including reading skill and knowledge, and collected information on text attributes. Participants’ responses to the four multiple-choice comprehension questions for each text, together with individual-difference measures such as reading skill and knowledge, were used in the current analysis, with additional kinds of text attributes included. In sum, apart from the health-text materials used to test participants’ comprehension responses and the participants recruited, all other variables and procedures were identical in both studies. Since the difference between the two datasets was the inclusion of texts on COVID-19 in Study Two, and all variables included in both datasets were identical, the datasets were renamed Dataset One and Dataset Two to distinguish them more easily. \r\nMaterial \r\nFor Dataset One (Study One in the original data), 25 health-related information texts were collected from those available on NHS trust organization webpages. The texts were chosen from 115 candidate texts available among the web resources of a quasi-random sample of 23 NHS England trusts (10% of the 228 total in England). For Dataset Two (Study Two in the original data), 14 texts concerning a range of health matters and 15 texts concerning COVID-19 or guidance relating to the public health response to the pandemic were collected. As in Dataset One, the general health texts were selected as a sub-set of a (fresh) pool of 115 candidate texts extracted from those available among the web resources of a (new) sample of 23 NHS England trusts. 
The COVID texts were selected from a pool of 115 candidate texts extracted from those available from gov.uk, charity (British Heart Foundation, Cancer Research UK), NHS UK, and NHS England trust webpages. The selection of texts, for both general health and COVID-19 information, was made so that the sub-set of items varied as widely as possible across the distribution of values (for each pool of candidates) on each critical text feature. For each text chosen, a set of four multiple-choice questions (MCQs) was constructed, each with three answer options, to test participants’ comprehension. \r\nIndividual differences measured: vocabulary knowledge, health literacy, reading comprehension skill, and reading strategy: \r\nVocabulary knowledge. The Shipley vocabulary sub-test was used to estimate vocabulary knowledge (Kaya et al., 2012). In the Shipley test, participants were required to choose, from four alternatives, the word synonymous with a target stimulus word (the other three alternatives being semantically related or unrelated distractor words). Each participant’s score was the total number of correct answers out of 40 multiple-choice items. \r\nHealth literacy. The Health Literacy Vocabulary Assessment (HLVA) was used to estimate health literacy. Participants were again required to choose the synonymous word from four alternatives to a target stimulus word, with all items set in health contexts. Since the vocabulary presented was drawn from the health-care profession, the HLVA is designed to test participants’ background knowledge of health matters and is considered an index of health literacy. Each participant’s score was the total number of correct answers out of 16 multiple-choice items. \r\nReading skill. The Qualitative Reading Inventory (Leslie & Caldwell, 2017) was used to assess reading skill. 
Participants were asked to read a short factual text (comprising 802 words) about the life cycle of stars and then answer two sets of 10 open-class questions related to the text. The questions covered not only information that can be found explicitly in the text but also information that requires inference from background knowledge. Each participant’s QRI score was the total number of correct answers out of 20 open-class questions. \r\nReading strategy. A reader-based standards-of-coherence measure published in a doctoral dissertation by Calloway (2019) was used to assess reading strategy. Participants were asked to complete a 5-point Likert scale based on their reading experience, ranging from very untrue to very true. The scale includes 87 items and has been shown to measure readers’ reading goals and learning strategies effectively. Each participant’s score was derived from their responses to the 87-item scale. \r\nText feature measures: traditional readability test scores and Coh-Metrix scores of the health-related information texts presented to participants: \r\nReferential cohesion. The Coh-Metrix tool was used to calculate the referential cohesion (co-reference) of texts. Referential cohesion captures the degree of overlap of concepts, words, and pronouns between sentences and paragraphs. As the similarity of sentences and conceptual ideas within a text increases, it becomes easier for readers to make connections between ideas and sentences (Coh-Metrix, 2012). Nevertheless, texts low in referential cohesion are sometimes appropriate when readers are required to be more actively involved in comprehending a text (Coh-Metrix, 2012). \r\nDeep cohesion. The Coh-Metrix tool was used to calculate the deep cohesion of texts. Deep cohesion refers to how well a text is tied together by a sufficient number of cohesion ties, also called connectives (Coh-Metrix, 2012). 
The calculation of deep cohesion in a text is determined by the number of connectives, including temporal, causal, additive, logical and adversative connectives, which connect ideas and propositions and clarify relations in a text (Kintsch, 1998). Using connectives effectively helps to tie the information together and thus facilitates readers’ understanding. \r\nFlesch Reading Ease Score (FRE). The FRE (Badarudeen & Sabharwal, 2010) is one of the traditional readability tests. The formula for the FRE is 206.835 - (1.015 * ASL) - (84.6 * ASW), where ASL represents the average sentence length and ASW represents the average number of syllables per word. The FRE evaluates texts on a 100-point scale, and higher scores indicate that a text is easier to comprehend. \r\nThe Gunning Frequency of Gobbledygook (FOG). The FOG (Roberts et al., 1994) is one of the traditional readability tests. The formula for the FOG is 0.4 * (ASL + % polysyllabic words), where ASL represents the average sentence length. Passages tested with the FOG must contain more than 100 words, and the result corresponds to the education level that a reader needs to comprehend the text. \r\nThe Flesch–Kincaid Grade Level (FKG). The FKG (Woodmansey, 2010) is one of the traditional readability tests. The formula for the FKG is (0.39 * ASL) + (11.8 * ASW) - 15.59, where ASL represents the average sentence length and ASW represents the average number of syllables per word. The FKG provides a number indicating the specific grade that readers should have achieved to comprehend the text, which ranges from grades 3 to 12. \r\nSimple Measure of Gobbledygook (SMOG). The SMOG (McLaughlin, 1969) is one of the traditional readability tests. The formula is 1.043 * square root of (number of polysyllabic words * [30 / number of sentences]) + 3.1291. 
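The four traditional formulas described above can be written down directly. A minimal sketch that takes precomputed text statistics (average sentence length, syllables per word, polysyllable counts) as inputs, rather than deriving them from raw text; counting syllables is a separate problem not attempted here.

```python
def fre(asl, asw):
    """Flesch Reading Ease: higher scores indicate easier text."""
    return 206.835 - (1.015 * asl) - (84.6 * asw)

def fog(asl, pct_polysyllabic):
    """Gunning FOG: returns the school grade needed to comprehend the text."""
    return 0.4 * (asl + pct_polysyllabic)

def fkg(asl, asw):
    """Flesch-Kincaid Grade Level (grades roughly 3 to 12)."""
    return (0.39 * asl) + (11.8 * asw) - 15.59

def smog(n_polysyllabic, n_sentences):
    """SMOG grade; note the +3.1291 constant sits outside the square root."""
    return 1.043 * (n_polysyllabic * (30 / n_sentences)) ** 0.5 + 3.1291

# Example: a 100-word, 5-sentence passage, ASL = 20 words/sentence,
# ASW = 1.5 syllables/word, 10% polysyllabic words (10 polysyllables).
print(round(fre(20, 1.5), 2))   # 59.64
print(round(fog(20, 10), 2))    # 12.0
print(round(fkg(20, 1.5), 2))   # 9.91
print(round(smog(10, 5), 2))    # 11.21
```

On this hypothetical passage the three grade-level measures broadly agree (roughly grades 10-12), while the FRE score of about 60 falls in the "plain English" band of its 100-point scale.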
The SMOG also provides a school grade as a result, indicating the specific education level a reader should have to understand a text, and it was recommended by the National Cancer Institute as having a better performance than the other tests. \r\nDemographic attributes. Participants’ demographic characteristics were recorded, including gender (coded: Male, Female, non-binary, prefer not to say), education (coded: Secondary, Further, Higher), and ethnicity (coded: White, Black, Asian, Mixed, Other). "]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3026"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3027"},["text","Data/Excel.csv\r\nData/R.r\r\nData/DS_Store"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3028"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"3029"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3030"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3031"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is 
relevant"],["elementTextContainer",["elementText",{"elementTextId":"3032"},["text","LA1 4YF"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3039"},["text","Mistry, Daniel\r\nLin, Pei-Ying"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3040"},["text","Liu2015"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3041"},["text","Vocabulary knowledge, health literacy, reading comprehension skill, reading strategy"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3033"},["text","Robert Davies"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"3034"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3035"},["text","Cognitive"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3036"},["text","307 participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3037"},["text","Bayesian 
analysis"]]]]]]]],["item",{"itemId":"147","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"137"},["src","https://www.johnntowse.com/LUSTRE/files/original/f32d9fb1ed51218774543381b3025654.xlsx"],["authentication","9d383cde2bea34174cef2f6b085935ca"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaires(s)"]]]]]]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3042"},["text","Do inward and outward consonants and vowels have different effects on customers’ liking rates towards brand names?"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3043"},["text","Keung Wang Shan"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3044"},["text","5/9/2022"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3045"},["text","The origin of speech development lies in the way that infants and children produce their first words. In the early stages of speech acquisition, children tend to produce syllables that require little energy to produce, such as intrasyllabic and intersyllabic consonant-vowel co-occurrence patterns (MacNeilage et al., 2000). Such patterns may affect individuals’ preference for words later in life, such as brand names. More pointedly, according to Topolinski et al. (2014), there is an in-out effect that significantly affects individuals’ liking rates towards brand names containing inward and outward consonants. However, previous findings have focused only on such effects for consonants, and there is insufficient research on the combined effects of consonants and vowels on brand names. 
Therefore, this study was designed to investigate whether such in-out effects of both consonants and vowels in English brand names are associated with customers’ emotional responses to the words, and whether the involvement of MacNeilage syllables in the brand names is associated with customers’ liking rates. The experiment was conducted through an online questionnaire containing 360 sound stimuli, testing participants’ liking rates for brand names, which were non-words combining inward and outward consonants and vowels, and MacNeilage syllables. Results showed that liking rates were significantly higher for brand names containing inward consonants and vowels, while lower liking rates were associated with outward consonants and vowels. In addition, no significant relationship was found between the number of MacNeilage syllables and preference for the brand names, although individuals showed a higher preference for brand names containing a MacNeilage syllable as the first syllable of the word. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3046"},["text","Consonants, vowels, MacNeilage syllables, brand names, liking rates"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3047"},["text","Participants\r\nA total of 51 participants who spoke different first languages were recruited through the researcher’s family and friends, as well as via SONA. All were healthy individuals aged 18 or above, with normal vision and hearing. 
The participants included 23 males and 28 females, with an age range from 22 to 28 and a mean age of 23.33, SD=.\r\nMaterials\r\nThe study was carried out as an online questionnaire consisting of four open-ended questions at the beginning and 360 questions answered on a 10-point Likert scale. The questionnaire centred on liking rates for brand names presented as sound stimuli. The first four open-ended questions asked for participants’ age, gender, first language and whether they spoke other languages (see Appendix D). Next, 360 questions, each containing an audio recording of a sound stimulus between one and three seconds long, were presented in the questionnaire (see Appendix D). All sound stimuli were produced in a monotone and recorded by the researcher’s supervisor, a native English speaker with a Northern English accent and training in phonology. The 360 sound stimuli were divided into six sets corresponding to six combinations of inward and outward consonants and vowels. The six sets of stimuli comprised nonwords containing consonants that required articulation from the front to the middle to the back of the mouth (inward) (FMB), from front to back to middle (FBM), from middle to front to back (MFB), from middle to back to front (MBF), from back to middle to front (outward) (BMF) and from back to front to middle (BFM). There was a total of 60 stimuli with the same articulation of consonants and different articulations of vowels in each set, and 10 stimuli with the same articulation of both consonants and vowels in each set. Within each set sharing the same consonant articulation, six possible combinations of front/middle/back vowels were paired with the consonants, so that every possible arrangement of front/middle/back consonants and vowels was tested in the questionnaire. 
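The six consonant orders named above (FMB, FBM, MFB, MBF, BMF, BFM) are simply the 3! orderings of front (F), middle (M) and back (B) articulation, and crossing them with the six vowel orders reproduces the stimulus count. A small illustrative sketch, not taken from the study materials:

```python
from itertools import permutations, product

# The six articulation orders are the 3! permutations of F(ront), M(iddle), B(ack).
consonant_orders = ["".join(p) for p in permutations("FMB")]
vowel_orders = list(consonant_orders)  # vowels use the same six orderings

# 6 consonant orders x 6 vowel orders = 36 combinations;
# 10 stimuli per combination gives the 360 questionnaire items.
pairs = list(product(consonant_orders, vowel_orders))
print(sorted(consonant_orders))  # ['BFM', 'BMF', 'FBM', 'FMB', 'MBF', 'MFB']
print(len(pairs) * 10)           # 360
```

This also makes explicit why each consonant set contains 60 stimuli: six vowel orderings times ten stimuli per consonant-vowel combination.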
Moreover, among the 360 stimuli, 120 contained zero MacNeilage syllables, 178 contained one MacNeilage syllable and 62 contained three MacNeilage syllables. To ensure that there was no personal bias towards the brand names, all stimuli were nonwords created by the researcher so that participants would not be familiar with any of the brand names.\r\nProcedure\r\nBefore the study began, all participants were sent a participant information sheet and consent form through email (see Appendix A & B). Participants were then given a link to the online questionnaire, which was attached in the same email. At the beginning of the questionnaire, four open-ended questions on personal information were presented, asking participants their age, gender, first language and whether they spoke other languages (see Appendix D). After completing these four questions, participants answered 360 questions, each containing an audio recording of a sound stimulus, referred to as a brand name in this survey. Each question was displayed as ‘how much do you like this brand name’, and participants were asked to rate each sound stimulus according to their preference on the 10-point Likert scale, with 1 labelled as the lowest and 10 as the highest (see Appendix D). There was a ‘play’ button in every question with which participants could play the sound stimulus, and they were allowed to replay the audio as many times as they wished. In the questionnaire, five questions were presented on each page and there were 73 pages in total, including one page at the beginning for the four open-ended questions. The 360 questions on the sound stimuli were presented in randomised order for each participant to ensure there were no order effects relating to individual stimuli in the data. The whole study took around 20 to 30 minutes, depending on whether participants replayed the audio. 
After completing the questionnaire, all participants were sent a debrief sheet via email, allowing them to ask any questions regarding the study (see Appendix C).\r\nEthics\r\nThe study was granted ethics approval on 19/05/2022. Both a participant information sheet and a consent form were delivered to all participants before the study began, informing them of their right to withdraw up to three weeks after participating in the experiment if they changed their minds. After completion of the questionnaire, a debrief sheet was sent to participants to allow them to raise questions regarding the study. They were also informed that their participation was confidential, with all data stored in encrypted files.\r\n\r\n\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3048"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3049"},["text","Data/Excel.csv\r\nData/R.r"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3050"},["text","none"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3051"},["text","Keung Wang Shan"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3052"},["text","open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related 
resource"],["elementTextContainer",["elementText",{"elementTextId":"3053"},["text","none"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3054"},["text","english"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3055"},["text","data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"3056"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3057"},["text","Padraic Moonaghan"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"3058"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3059"},["text","Developmental Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3060"},["text","51 participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3061"},["text","Linear mixed effects 
modelling"]]]]]]]],["item",{"itemId":"148","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"146","order":"2"},["src","https://www.johnntowse.com/LUSTRE/files/original/055f608897628d54c7f2a243de72eb63.txt"],["authentication","849ed4bf5f0ebe3ec34bccd7856d6c63"]],["file",{"fileId":"147","order":"3"},["src","https://www.johnntowse.com/LUSTRE/files/original/0caf76688d0fd87a937daad8cef0af66.txt"],["authentication","913353fac700af17d02d4381a7540773"]],["file",{"fileId":"148","order":"4"},["src","https://www.johnntowse.com/LUSTRE/files/original/3385513f4c4cf01a4bbf9e074f9fcf10.csv"],["authentication","16a611e6b866f8552c70c6cb4c5f698a"]],["file",{"fileId":"143","order":"5"},["src","https://www.johnntowse.com/LUSTRE/files/original/8c74bde845d079abadf048bba0316db4.doc"],["authentication","c06cb4848dbba3e5b81d80f0518d47b5"]],["file",{"fileId":"149"},["src","https://www.johnntowse.com/LUSTRE/files/original/0dfdf4ec4a7cc89c6cc485920a130a43.doc"],["authentication","ebc62a1e24e476b869cb3c367f917845"]]],["collection",{"collectionId":"2"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"179"},["text","Eye tracking "]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"180"},["text","Understanding psychological processes though eye tracking"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3062"},["text","Lights, Camera, Action: Investigating Advertisement Susceptibility in Films Amongst Individuals with Parkinson’s Disease and Controls. "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3063"},["text","Elena Ball"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3064"},["text","07.09.2022"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3065"},["text","Product placement is the merging of entertainment with advertising, and its presence in our daily lives is increasing. Despite this, there is an inherent lack of consideration of its influence amongst vulnerable populations such as individuals with Parkinson’s disease (PD). Research suggests that individuals with PD have reduced inhibitory control (IC) which may drive impulsive behaviours. A concern, therefore, is the influence that product placement may have on the purchase behaviour of individuals with PD alongside a possible propensity to partake in risky and impulsive behaviours. Thus, this study aimed to examine whether reduced IC increases the likelihood that an individual with PD will be susceptible to product placement. 
The study adopted an experimental approach, recruiting 20 healthy younger controls, 20 healthy older controls, and 13 individuals with mild to moderate PD to participate in watching two films containing product placement; one featuring Coca Cola and the other an Audi. A pre and post product placement questionnaire was used to measure change in purchase behaviour before and after exposure to product placement, and an antisaccade eye tracking task and a Stroop task were used to measure IC. An ANOVA indicated that IC was significantly impaired in individuals with PD compared to healthy controls. Despite this, linear mixed effects modelling suggested that IC may not be a factor that increases the likelihood that an individual will be more susceptible to product placement. Implications of these findings are discussed relative to other clinically vulnerable populations with similar cognitive impairment symptomology, and the consequent need for future research to continue to explore product placement susceptibility amongst vulnerable populations. \r\n\r\n"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3066"},["text","Parkinson’s Disease, Inhibitory Control, Product Placement Susceptibility \r\n\r\n"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3067"},["text","Method\r\nParticipants\r\nA voluntary sample of 54 participants was recruited: 20 healthy younger controls (YC) (16 females and four males, (Mage= 22.70, SDage= 2.42)), 20 healthy older controls of comparable age to those with Parkinson’s (OC) (females and males, (Mage= 66.85, SDage= 8.53)), and 15 adults with mild-moderate idiopathic PD (females and males, (Mage= 65.00, SDage= 7.84)). 
As this research area is entirely novel, this sample size was modelled on comparable population studies that have explored IC (Meyer et al., 2020; Paz-Alonso et al., 2020). YC were defined as young adults aged between 18 and 26 years with no neurological or cognitive conditions (Stroud et al., 2015). OC were defined as adults aged between 50 and 85 years with no neurological or cognitive conditions (Zhang et al., 2020). The participants with PD had been diagnosed with mild-moderate idiopathic PD, characterised by mild-moderate impairments of motor and cognitive functioning (DeMaagd & Philip, 2015). \r\nThe exclusion criterion for both the healthy controls and individuals with PD was a diagnosis of any additional neurological or cognitive condition other than PD. Moreover, given that visual impairments may affect the visual experience of product placement, all participants were screened for red-green colour blindness using the Ishihara test. The standardised cut-off for normal vision is 15 (Rodriguez-Carmona & Barbur, 2017); therefore, participants who scored 14 or less were excluded, as this is indicative of the presence of red-green colour blindness. \r\nAll participants had normal or corrected-to-normal vision. The Addenbrooke’s Cognitive Examination-III (ACE) was used to screen for the presence of cognitive impairment (Bruno & Vignaga, 2019). Participants’ data were only included in the analysis if they achieved a score within the normal range (≥ 82 out of 100). Following this exclusion criterion, one PD participant’s data was removed. Research has shown saccadic eye movements to be influenced by cognitive dysfunction (Hutton, 2008; MacAskill et al., 2012), thus cognitive impairments needed to be screened for, as this study measured saccadic eye movements as a measure of IC. Subsequently, following the exclusion criteria, 53 participants’ data were included within the analysis. 
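The screening cut-offs above (Ishihara score of at least 15 for normal red-green colour vision; ACE score of at least 82 for no probable cognitive impairment) amount to a simple eligibility check. A hedged Python sketch with hypothetical score values:

```python
ISHIHARA_CUTOFF = 15  # scores of 14 or less indicate probable red-green colour blindness
ACE_CUTOFF = 82       # out of 100; lower scores indicate probable cognitive impairment

def eligible(ishihara_score: int, ace_score: int) -> bool:
    """Return True if a participant passes both screening assessments."""
    return ishihara_score >= ISHIHARA_CUTOFF and ace_score >= ACE_CUTOFF

print(eligible(16, 90))  # True: passes both screens
print(eligible(14, 90))  # False: indicative of red-green colour blindness
print(eligible(16, 80))  # False: below the ACE normal range
```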
\r\nPD participants were selected who were at Hoehn and Yahr Stage three or lower (see Table 1 for background characteristics for participants attached in the files below). The Hoehn and Yahr scale is used to give a summary of the laterality and severity of PD symptomology (Readman et al., 2021b). Five participants presented unilateral symptoms only (stage one), seven participants presented bilateral symptoms with no impairment of balance (stage two) and one participant presented bilateral symptoms with some postural instability but was not physically dependent (stage three). PD symptomology was assessed using the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) (Evers et al., 2019). All PD participants were tested under their usual medication regimes and were in a typical functioning ‘ON’ phase. Eight participants were taking a dopamine agonist (e.g., Ropinirole), eight participants were taking a combination drug (e.g., Madopar), six participants were taking a monoamine oxidase inhibitor (e.g., Rasagiline), and two participants were taking a Catechol-O-Methyl Transferase inhibitor (e.g., Entacapone). \r\nYC were recruited through the researcher’s social network, whereas both OC and individuals with PD were recruited through established research interest databases (OC C4AR database; PD MRR PD interest database (FST2005)). \r\nMaterials\r\nHealth and Demographic Questionnaire\r\n\tThe health and demographic questionnaire (HADQ) was developed and distributed using Qualtrics (Qualtrics, 2022), an online software that aids the process of building, distributing, and analysing surveys (Carpenter et al., 2019). The HADQ comprised four distinct subsections pertaining to both the participants’ general demographics and more specific health-related measures.\r\n\tDemographic Questions. For participant group allocation, participants were asked for their age, sex, and whether they held a diagnosis of PD. 
Information about participants’ age also afforded the opportunity for exploration into the possible effect of age as well as PD on product placement susceptibility. \r\nThe Hospital Anxiety and Depression Scale (HADS). The HADS is a 14-item (seven items pertaining to anxiety and seven items pertaining to depression) self-report assessment of anxiety and depression suitable for both psychiatric and non-psychiatric populations (Stern, 2014). All items are rated on a 4-point severity scale, with a score of 11 or more on each subscale being indicative of probable anxiety or depression respectively (Caci et al., 2003; Edelstein et al., 2010). Literature has found the HADS to be high in construct validity, with very good internal consistency observed when measuring anxiety (Cronbach’s α = .83) and depression (Cronbach’s α = .82) (Bjelland et al., 2002; Johnston et al., 2000; Mondolo et al., 2006). \r\n\tEdinburgh Handedness Inventory. The Edinburgh Handedness Inventory is a 10-item self-report questionnaire in which participants are asked to indicate a preference for which hand they would use when completing a range of daily activities (e.g., brushing teeth) (Robinson, 2013). Through this, a handedness score ranging from 100 (strong right) to -100 (strong left) was deduced. Excellent internal consistency was observed in the 10-item Edinburgh Handedness Inventory (Cronbach’s α = .94) (Fazio et al., 2013). Previous literature suggests that handedness and eye-dominance are correlated because of hemispheric specialisation (McManus, 1999; Willems et al., 2010); therefore, establishing participants’ handedness was indicative of their dominant eye when measuring IC through saccadic eye movements. \r\nPD Diagnosis questions. Participants with PD were asked to provide specifics relating to their diagnosis, including years since diagnosis, years since presumed onset, and which medication, and at what dosage, they were prescribed. 
These items were necessary to investigate whether PD severity and medication type influence product placement susceptibility.\r\nScreening Assessments\r\n\tCognitive Impairments. The Addenbrooke’s Cognitive Examination-III (ACE) is a cognitive assessment that screens for the probable presence of cognitive impairments (Noone, 2015). The ACE comprises 24 items that analyse attention, memory, fluency, language, and visuospatial processing (Bruno & Vignaga, 2019). Very good internal consistency (Cronbach’s α = .88; Kan et al., 2019) and validity (Matias-Guiu et al., 2017; Takenoshita et al., 2019) have been observed for the ACE. \r\nVisual Impairments. The Ishihara test is a reliable (Birch, 1997) 17-item assessment for red-green colour blindness that requires participants to read aloud a set of numbers on Ishihara plates that are made up of coloured dots (Marey et al., 2015). \r\nPD Symptomology. The MDS-UPDRS is a tool to measure the progression of PD symptomology (Evers et al., 2019). The MDS-UPDRS comprises a series of tasks that assess PD symptomology within the last week, in the domains of mentation, behaviour and mood, activities of daily life, motor abilities, and complications of therapy (Holden et al., 2018). Very good internal consistency (Cronbach’s α = .90; Abdolahi et al., 2013) and valid assessment of PD symptomology severity (Goetz et al., 2008; Metman et al., 2004) have been observed for the MDS-UPDRS. \r\nMeasures of Inhibitory Control \r\n\tEye Tracking Tasks. The prosaccade and antisaccade tasks were created using Experiment Builder Software Version 1.10.1630 and the data were extracted and analysed using Data Viewer Software. Eye movements were recorded via the EyeLink Desktop 1000 at 500 Hz. Whilst their eye movements were recorded, participants were asked to place their chin on a chin rest to reduce head movements. Participants sat approximately 55cm away from the computer monitor (monitor refresh rate of 60 Hz). 
\r\nFirstly, participants were asked to complete the 4-point calibration task to improve eye tracking accuracy (Pi & Shi, 2019). In this task participants were asked to follow a red target around the screen as it moved up, down, left, and right. Next, participants completed the prosaccade eye tracking task. To centralise participants’ gaze, participants were instructed to look at a white fixation target displayed on a computer screen for 1000ms. Participants were then instructed to look towards a red lateralised target that appeared on screen for 1200ms at a 4° visual angle either to the left or to the right of where the white central dot had been located, as quickly and as accurately as possible (Readman et al., 2021a). The eye tracking equipment measured participants’ saccades and latencies (how long it took for participants to fixate on the red target). A total of 16 gap trials were presented, with a blank interval screen displayed for 200ms between the extinguishment of the white fixation target and the initial appearance of the red target, which resulted in a temporal gap in stimuli presentation. The prosaccade task was incorporated to ensure that alterations in participants’ antisaccade task performance were not due to impaired prosaccades but rather indicative of alterations in IC. \r\nFor the antisaccade task, participants were first asked to look at a central white fixation dot for 1000ms to centralise their gaze. Participants were then asked to direct their gaze and attentional focus to the opposite side of the screen to where a green lateralised target was presented for 2000ms at a 4° visual angle either to the left or to the right of where the white central dot had been located, as quickly and accurately as possible (Derakshan et al., 2009). See Figure 1 above for a visual display of an antisaccade task. 
The eye tracking equipment measured participants’ saccades, latencies (how long it took participants to fixate their gaze in the opposite direction to the green target), and error rates (how many times participants incorrectly looked at the green target). A total of 16 gap trials were presented, with a blank interval screen displayed for 200ms between the extinguishment of the white fixation target and the initial appearance of the green target, which resulted in a temporal gap in stimuli presentation. \r\n\tStroop Test. The Stroop test was conducted using PsyToolkit’s free online demonstration (PsyToolkit, 2022). Unlike in the original Stroop test, whereby participants had to say the ink colour aloud (Stroop, 1935), using PsyToolkit’s online Stroop test allowed for a more accurate measurement of participants’ reaction time (ms) through pressing the key corresponding to the ink colour (Brenner & Smeets, 2018). Participants completed the Stroop test on a HP ProBook 470 G5 17.3” laptop (HP, 2022), and were sat approximately 30cm away from the laptop. Presenting the Stroop test on this laptop enabled participants to view the test on a large screen, thus improving the accessibility of the test. The colour words presented to participants were ‘red’, ‘green’, ‘yellow’, and ‘blue’.\r\n\tParticipants were instructed to press the key corresponding to the initial letter of the ink colour of the printed word presented on screen as quickly and accurately as possible. For example, the correct answer for RED printed in blue ink would be to press the key ‘B’ for blue. A total of 40 gap trials were presented. For each trial, a colour word was presented on screen for 2000ms. The colour word was either congruent (the word and its ink colour match, e.g., GREEN printed in green ink) or incongruent (the word and its ink colour differ, e.g., GREEN printed in red ink). There was a 100ms gap in presentation of the word, in which a white cross was presented on a black interval screen. 
Participants’ congruent and incongruent reaction times (ms), correct Stroop score (correctly identified ink colours out of 40), and Stroop effect (incongruent reaction time (ms) minus congruent reaction time (ms)) were recorded.\r\nThe ease with which the Stroop test can be conducted in a non-laboratory environment, and the simplicity with which the colour words can be translated into other languages, increase its accessibility and universality as a measure of IC (Gass et al., 2013). This assessment would, however, be an invalid measure of IC for individuals affected by colour blindness or dyslexia, limiting the populations the Stroop task can assess (Scarpina & Tagini, 2017). \r\nProduct Placement Film Clips\r\nThe incorporation of film clips containing product placement was guided by the prominent use of film clips within previous research that had investigated product placement susceptibility (Kamleitner & Jyote, 2013; Yang & Roskos-Ewoldsen, 2007). Jurassic World featuring Coca Cola and Avengers Endgame featuring Audi were chosen as they were popular films that contained product placement that both younger and older adults would recognise (Malaj, 2022), minimising the effects of familiarity. Furthermore, these two film clips were chosen because they contained product placement of products of different monetary value, thus controlling for the potential effects of monetary value on product placement susceptibility (McDermott et al., 2006). \r\n\tBoth film clips were downloaded from YouTube and trimmed to last approximately one minute each to shorten the study, given the propensity for individuals with PD to tire because of the symptomology they present with (see Appendix A for the screen shots of the two film clips). 
The two film clips were shown on a HP ProBook 470 G5 17.3” laptop because the large screen enhanced participants’ visual experience of product placement (HP, 2022).\r\nMeasure of Purchase Intention\r\n\tSeparate pre and post product placement questionnaires for each clip were made using Qualtrics (Qualtrics, 2022). To measure purchase behaviour, participants were asked how strong their preference was to buy the featured drink/car brands on a Likert scale of one to seven (from one = “Extremely unlikely” to seven = “Extremely likely”). Literature has found 7-point Likert scales to be more reliable because they allow for more accurate and differentiated responses than smaller scales such as 5-point Likert scales (Cicchetti et al., 1985; Finstad, 2010). The use of a 7-point Likert scale therefore provided a more sensitive and accurate measurement of product placement susceptibility. Both the pre and post product placement questionnaires asked participants the same questions, therefore enabling us to measure whether there was a change in participants’ responses prior to and after exposure to product placement (Matthes et al., 2007).\r\nDesign\r\n\tThe study used a 3 between-subjects (Participant Status: Healthy Young Controls vs. Healthy Older Controls vs. Individuals with Parkinson’s Disease) × 2 within-subjects (Product Placement Category: Drink vs. Car) mixed design.\r\nProcedure\r\nAs this study recruited a vulnerable population, the information sheet was sent to participants via email 48 hours prior to the in-person study. This afforded participants the time to ask questions or express any concerns about the study before being sent the consent form 24 hours prior to commencing the in-person study. Once participants had read and completed the digital consent form, they were sent the digital HADQ, which took approximately 10 minutes to complete. 
\r\n\tPrior to the main study, participants were screened for cognitive impairment, using the ACE, and visual impairment, using the Ishihara test. At this time the severity of Parkinson’s symptomology was assessed using the MDS-UPDRS where appropriate.\r\n\tOn completion of all pre-study screening, participants were asked to first complete a prosaccade eye tracking task and then an antisaccade eye tracking task, which together took approximately 10 minutes. \r\n\tParticipants were then asked to complete a pre product placement questionnaire and then watch a short film clip. After watching the film clip, participants were asked to complete a post product placement questionnaire. Finally, participants were asked to complete the Stroop test, which took approximately five minutes, to provide a further measure of IC and to act as a time buffer. \r\n\tThis process was repeated for a second product category condition. The order of condition completion was randomly counterbalanced across participants to increase internal validity by minimising the potential for order effects (Corriero, 2017). The in-person study lasted approximately an hour for healthy controls and an hour and 30 minutes for PD participants. At the end of the study, participants were read and given a copy of the debrief sheet, thanked for their participation and time, and given £10 as a contribution towards travel expenses. All raw data were stored on the Lancaster University OneDrive, on a password-protected computer.\r\nData Analysis\r\n\tThe raw data from the prosaccade and antisaccade tasks were extracted using the EyeLink DataViewer Software (Version 3.2) and processed using the bespoke software SaccadeMachine (Mardanbegi et al., 2019). Noise in the dataset was removed by filtering out frames with a velocity signal greater than 1,500 deg/s or with an acceleration signal greater than 100,000 deg/s². The EyeLink Parser was used to detect fixations and saccadic events. 
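The noise filter above reduces to a simple threshold check on each frame. A minimal Python sketch; the record fields here are hypothetical and not SaccadeMachine’s actual output format:

```python
VELOCITY_MAX = 1_500   # deg/s ceiling for the velocity signal
ACCEL_MAX = 100_000    # deg/s^2 ceiling for the acceleration signal

# Hypothetical frame records; field names are illustrative only.
frames = [
    {"velocity": 420.0, "accel": 55_000.0},    # plausible signal: kept
    {"velocity": 2_100.0, "accel": 55_000.0},  # velocity spike: filtered out as noise
    {"velocity": 420.0, "accel": 150_000.0},   # acceleration spike: filtered out as noise
]

clean = [f for f in frames
         if f["velocity"] <= VELOCITY_MAX and f["accel"] <= ACCEL_MAX]
print(len(clean))  # 1
```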
Saccades were extracted alongside multiple temporal and spatial variables. Trials were excluded in cases where the participant did not direct their gaze to the central fixation target. Saccades were accepted within a temporal window of 80-700ms after target onset; thus anticipatory saccades made prior to 80ms and excessively delayed saccades made after 700ms were removed.\r\n\tTo improve data analysis reproducibility, statistical analyses were conducted using RStudio (version 2022.09.0) (Quick, 2010). To prepare the Stroop test data for analysis, participants’ Stroop scores (correctly identified ink colours out of 40), congruent and incongruent trial reaction times (ms), and Stroop effect (incongruent trial reaction time (ms) minus congruent trial reaction time (ms)) were downloaded from PsyToolkit into an Excel file. IC was operationalised as the Stroop effect (Kane & Engle, 2003). \r\n\tTo investigate susceptibility to product placement, a difference in purchasing behaviour score was calculated for each product. To do so, the pre product placement ratings of the likelihood of purchasing each brand were subtracted from the post product placement ratings of the likelihood of purchasing each brand. A positive difference indicated that participants were more likely to buy the featured product after exposure to product placement, a negative difference suggested that participants were less likely to buy the featured product, and a difference of zero indicated no change in purchase behaviour. \r\n\tFirst, to confirm the assumption that IC is impaired in individuals with PD compared to healthy controls, three separate between-factor ANOVAs were performed to compare the main effect of group (YC, OC, and PD) on antisaccade latency, antisaccade error rate, and Stroop effect (see Appendix B for R code). A between-factor ANOVA was chosen because it compares three or more categorical groups to establish whether there is a significant difference on a dependent measure (Henson, 2015). 
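The two derived measures above (purchase-behaviour difference score and Stroop effect) reduce to simple subtractions. A hedged Python sketch with made-up ratings and reaction times:

```python
# Purchase-behaviour difference score: post minus pre product placement rating
# (1-7 Likert). Positive = more likely to buy after exposure to product placement.
def difference_score(pre_rating: int, post_rating: int) -> int:
    return post_rating - pre_rating

# Stroop effect: mean incongruent RT minus mean congruent RT (ms).
# Larger values indicate weaker inhibitory control.
def stroop_effect(congruent_rts, incongruent_rts):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

print(difference_score(pre_rating=3, post_rating=5))    # 2: more likely to buy
print(stroop_effect([600, 620, 610], [700, 720, 710]))  # 100.0
```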
As ANOVA results only identify a difference between groups, post hoc Tukey HSD tests for multiple comparisons were conducted to determine where the differences lay between groups (Abdi & Williams, 2010). \r\n\tTo investigate whether IC influences product placement susceptibility, a linear mixed effects model (LMM) was fitted. The LMM incorporated the difference in purchase behaviour scores (differencescore) as the outcome, and group (PD vs. healthy older control vs. healthy younger control) and measures of IC (antisaccade latency, antisaccade error rate, and Stroop effect) as fixed effects. Given that IC is part of an individual’s executive function (Crawford et al., 2002), ACE score (as a measurement of the participants’ overall cognitive function; Noone, 2015) was also fitted as a fixed effect. An LMM allows for the analysis of fixed effects of independent variables whilst also considering unexplained differences corresponding to random effects, such as participant variation (Baayen et al., 2008). Random effects of both participants and product (Car or Drink) on intercepts were added (see Appendix C for R code). The LMM was fitted using the Satterthwaite adjustment method in the lme4 package (Bates et al., 2014) in RStudio (version 2022.09.0) (Quick, 2010). 
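As a check on what the between-factor ANOVA computes before the Tukey HSD step, here is a minimal pure-Python sketch of the one-way F statistic on toy data; it is illustrative only, since the study’s actual analyses were run in R:

```python
def one_way_anova_F(groups):
    """F statistic for a between-factor (one-way) ANOVA, computed from scratch."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: group size times the squared deviation
    # of each group mean from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Toy scores for three groups (e.g., YC, OC, PD on one IC measure)
print(round(one_way_anova_F([[1, 2, 3], [2, 3, 4], [5, 6, 7]]), 6))  # 13.0
```

A large F relative to the F(df_between, df_within) distribution indicates some group difference; the post hoc Tukey HSD step then locates which pairs of groups differ.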
\r\nEthics\r\n\tThis study received ethical approval from the Psychology Department Research Ethics Committee at Lancaster University on 22/06/2022 and complied with The British Psychological Society’s guidelines (2014).\r\n\r\n\r\n\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3068"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3069"},["text","Data/R.csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3070"},["text","Ball2022"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3071"},["text","Elena Ball"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3072"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"3073"},["text","N/A"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3074"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3075"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the 
resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"3076"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3077"},["text","Dr Megan Readman"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"3078"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3079"},["text","Psychology of Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3080"},["text","53 Participants. 20 healthy younger controls, 20 healthy older controls, 13 individuals with mild-moderate Parkinson's disease"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3081"},["text","ANOVA\r\nLinear Mixed Effects Modelling"]]]]]]]]]